IT 16 080

Degree project, 30 credits, October 2016

Automatic 3-axis component placement system with visual alignment for

mixed technology PCBs

Vasileios Bimpikas

Department of Information Technology


Faculty of Science and Technology, UTH unit

Visiting address:
Ångströmlaboratoriet, Lägerhyddsvägen 1, Hus 4, Plan 0

Postal address:
Box 536, 751 21 Uppsala

Telephone:
018 - 471 30 03

Fax:
018 - 471 30 00

Website:
http://www.teknat.uu.se/student

Abstract

Automatic 3-axis component placement system with visual alignment for mixed technology PCBs

Vasileios Bimpikas

For the present master thesis, a pick & place machine for through-hole components with control over three axes was studied and implemented. The endeavour was motivated by the trend towards increasingly automated production lines in the electronics manufacturing industry. Certain through-hole components require further modification after placing and soldering, such as having heatsinks screwed onto them.

That implies that a certain distance from the board must be ensured when placing and soldering the components, which currently involves additional manual labour to secure the components at the desired height until they are soldered, increasing cost and lowering productivity. The system developed here therefore places through-hole components at the desired height. For this purpose, a stepper motor system operating in open loop was mounted on a prototype mechanical table providing motion in three axes through a belt-and-pulley arrangement, for testing and evaluation. For additional robustness, a vision system was integrated as well: by locating the fiducial markers of the board, it can detect offsets along the X and Y axes, as well as any rotation of the board introduced during its placement.

The C code that drives the motors was combined with the C++ code of the vision system, which uses OpenCV, in a GUI to improve ease of use.

The developed system achieved a positioning accuracy of 0.7 mm, while the vision system counteracted the misalignment of sample boards with an accuracy of up to 0.4 mm. A soldering system operating in tandem with the developed placing system has been left as future work; it would complete the automated placement of the discussed components at the desired height and ultimately eliminate the additional manual work in the PCB manufacturing process.

Examiner: Arnold Neville Pears. Subject reviewer: Alexander Medvedev. Supervisor: Raymond Van Der Rots


Contents

1 Introduction . . . 1
1.1 Description . . . 1
1.2 Motivation . . . 1
1.3 Related work . . . 2
1.4 Project's specific challenge . . . 3
1.5 About the company: Electrocompaniet AS . . . 4
2 Theory . . . 5
2.1 Modbus protocol . . . 5
2.2 Vision System . . . 7
2.2.1 Hough Transform . . . 7
2.2.2 Coordinates translation . . . 12
3 Implementation . . . 15
3.1 System overview . . . 15
3.2 Materials used . . . 18
3.2.1 Hardware . . . 18
3.2.2 Software . . . 20
3.3 System modules . . . 22
3.3.1 Motor subsystem . . . 22
3.3.2 Vision subsystem . . . 24
3.4 Graphical User Interface . . . 28
4 Testing and evaluation . . . 30
4.1 Motor accuracy . . . 30
4.2 Vision system accuracy . . . 33
5 Challenges faced . . . 37
5.1 Motor subsystem . . . 37
5.2 Vision subsystem . . . 38
6 Conclusions . . . 39
7 Future work . . . 40


List of Figures

1 Modbus RTU frame illustration . . . 5

2 Modbus RTU frame timing . . . 6

3 Hough transform parameter space: circle’s center detection . . . 9

4 Sample photograph of board containing fiducial marker . . . 10

5 Picture of board containing fiducial marker after edge detection . . 10

6 A fiducial marker in the Hough Transform space . . . 11

7 The Hough Transform’s accumulator array . . . 11

8 Fiducial marker’s center detected . . . 12

9 The final setup of the system . . . 15

10 A flowchart of the system . . . 17

11 The system’s setup . . . 18

12 Fiducial detection: Focusing on a ROI . . . 24

13 Fiducial detection: Detected markers . . . 25

14 Coordinates translation . . . 26

15 A flowchart of the vision subsystem . . . 27

16 The developed Graphical User Interface . . . 29

17 Pulses per mm graph . . . 32

18 Pulse error graph . . . 32

19 Visualization of the coordinate translation . . . 33

20 Measuring the vision system’s error . . . 34

21 Future work: Addition of the soldering table . . . 40


1 Introduction

This introductory chapter presents the motivation that led to the undertaking of the present project, together with its goals and scope.

By the end of this chapter, the reader should have a clear and thorough understanding of the intended outcome.

1.1 Description

The developed system was intended as a prototype of a pick and place machine that can account for the placement height prior to the soldering of electronic components, in cases where the distance from the board itself is of interest, and can thereby eliminate additional manual work.

Within the present project's scope, control over the system's motors was achieved by developing a software driver able to communicate with the controller, which in turn communicates with the motors' driver circuits. The accuracy of this motor system was observed and measured by comparing intended and achieved locations. The ability to place components with the developed system was also assessed, and visual detection of potential misalignment of sample boards was achieved using the developed vision system.

Time did not allow the development of the selective soldering part of the system, which would solder the placed components; this has been left as future work.

1.2 Motivation

Currently, surface-mount technology (SMT) components on electronic boards are produced automatically following the process Paste mask → Pick & place → reflow oven. Pin-through-hole (PTH) components follow the process Pick & place → wave soldering. However, this requires that boards do not carry PTH and SMT components on the bottom side, as wave soldering would otherwise remove the SMT components [1]. The most popular way of addressing this issue is mini-wave selective soldering, where molten solder is pumped out of a nozzle which moves under the board along the board's coordinate system [2]. Selective soldering is a relatively new technology which meets both the need for PTH components in PCB designs and the need to move away from the conventional wave soldering process, which can damage SMT components.

Many boards do have this technology combination, since PTH technology cannot be entirely replaced by SMT, and that is not predicted to change in the near future [3]. PTH parts are highly desirable for components that require relatively high amounts of power to operate (e.g. digital displays, capacitors, etc.), and in other cases SMT alternatives may not be available for certain types of components or, when they are available, their cost is very high [4]. Therefore, quite a bit of manual labor is required for the placing and soldering of PTH components. Furthermore, the placement of such components can be critical, for instance when mounting transistors that are later to be screwed onto a heatsink.

During the component placement procedure, an automatic placement system takes the X and Y axes into consideration. For certain components, control over the Z axis (height) is also needed to avoid additional manual work, as in the above-mentioned case of transistors that require further modification after assembly. Currently, a mechanical fixture is used to hold in place those through-hole components that are sensitive to X, Y, Z placement. The number of fixtures depends on the number and arrangement of through-hole components; a row of transistors, for example, would be mounted on a single fixture.

The fixtures are removed after the soldering is complete. This requires a fair amount of manual work in an otherwise automated procedure. It is therefore clear that a solution is needed that can selectively place PTH components at defined X, Y, Z positions on a board. Since such a system is to be used in a production environment, it must be robust, consistent, and flexible in handling different PTH components (as well as component-to-component variation).

1.3 Related work

Placing and soldering

There are mainly two types of placement machines: sequential pick-and-place (PAP) and concurrent chip shooter (CS) [5].

A PAP machine first navigates to the location of an immovable feeder to pick up the part, and then moves to the target location to place it. The board sits at a fixed location on an X-Y table. This setup places one component at a time. PAP machines are intended for relatively large components and can achieve high accuracy and stability thanks to their sequential approach.

CS placement utilizes a number of mobile feeders and places the components through a rotating placement head with several nozzles. As a result of this concurrency, this type of placement machine achieves short job completion times. It is usually proposed for smaller components, such as surface-mount resistors or LEDs.

When it comes to mixed technology PCBs, which are the target of the present project, selective soldering has to be utilized, as described above, in order not to damage the other components.

Within the industry, the current technique is to place a shield (or "mask") so that the solder reaches only the designated spots on the board, while holding the PTH components in place. The Shield and method for selective wave soldering [6] by Curtis C. Thompson Sr. of Micron Electronics, Inc. describes exactly this: "A shield for use in wave soldering processes to selectively affix solder to an area of a circuit board having an electronic component thereon", and further, "The thickness of base member and the depth of any recess can be altered to adjust both for component height and for desired thermal shielding characteristics." This technique can fix the components in place at the desired height and allow soldering that will not damage the nearby components. However, it still requires the rather cumbersome procedure of "shielding" every PCB before placing and soldering begins.

Other proposals have also been made and patented throughout the industry.

In Power heatsink for a circuit board [7], Roden and Heseltine address the additional manual work required when heatsinks have to be used with power elements: a monolithic heatsink is integrated into the component prior to placement and soldering, instead of following more manual approaches such as the aforementioned shielding technique or installing the components by hand. This can indeed eliminate the problem, although the authors note that "it is also of some importance that a pick-and-place machine be able to manipulate the heatsink relatively easily, and that the heatsink and the device to be mounted on it both be in a configuration to accept soldering in an oven". Integrating the heatsink into the component does eliminate the further steps otherwise required, but it demands additional enhancements to the pick and place machine to handle the heatsink, and would thus further complicate the process.

Visual inspection

Regardless of the actual placement technique, and since the electronics manufacturing industry has grown highly automated, vision systems have been introduced as a means of further automating the process and reducing manual work. Hata [8] explains that visual inspection is of increasing importance to the industry and surveys the various types of such systems, including board positioning inspection, soldering inspection, PCB pattern inspection, and mounted SMT inspection.

Nian and Tarng [9] propose a 3-axis vision alignment system to counteract the slight misalignments that are to be expected in an automated production process. A mobile vision system with freedom of movement in three axes locates the fiducial markers of the PCB; relating the current coordinates of the fiducials to the position of the vision system yields the absolute coordinates. The board is then rotated by a small angle and the procedure is repeated. From the two sets of coordinates, the center of rotation can be determined, as well as the x and y offsets, which gives the compensation required when navigating to locations on the board. The authors state that "...the automatic system can perform the alignment task more accurately and efficiently than manual operation", something that increases productivity and reduces manual work during PCB assembly.

1.4 Project’s specific challenge

Although three-axis pick & place solutions are widely and readily available on the market, with a number of them presented in the previous section, some manufacturers still face the time- and man-hour-consuming task of taking additional care in the above-mentioned cases, i.e. when soldered components need further modification after placing and soldering. The unique aspect of the present work is that it accounts for the Z axis just as much as for the X and Y axes.

This report demonstrates that a well-developed solution such as a pick & place machine can, with little modification and some additional design, also solve the side problems that accompany the procedure.

1.5 About the company: Electrocompaniet AS

Electrocompaniet is a manufacturer of high-end audio systems based in Tau, Rogaland, Norway. Its products take the sound of live music as a baseline, translating what people actually hear into design specifications and the mathematics behind them, rather than basing the design on strictly technical terms and knowledge.

The company was founded in 1972 by Svein Erik Børja, after a conference presentation on transient intermodulation by Dr. Matti Otala [10]. Børja then turned the theory behind Otala's papers into transistor amplifier prototypes, with promising results.

Today, Electrocompaniet still produces audio systems, adapting to the current trends of wireless and online audio streaming while maintaining the classic line of audio amplifiers that inspired the company's founding. Both design and manufacturing take place at the company's main office and factory, which creates a need to research new electronics manufacturing equipment for the assembly line. This led to the opportunity of undertaking the present project in a real industrial and professional environment.


2 Theory

This section covers the background theory considered throughout the development of the present project: the areas the reader needs to comprehend in order to understand how the implementation actually works.

The areas covered are the communication protocol, which is used to pass commands from a PC to the motors' controller, and the theory behind the vision system: how misalignment of a board is detected, and how the methods used to do this actually work.

2.1 Modbus protocol

The Modbus protocol is an open communication protocol positioned on the 7th (application) layer of the OSI model. It was first released by Modicon (now a brand of Schneider Electric) and allows inter-device communication regardless of the communication medium and network the devices operate on [11]: for example over serial media (Modbus Remote Terminal Unit, or RTU) such as RS-232 and RS-485, as well as over the Ethernet interface (Modbus TCP/IP). It is a master-slave protocol, with one master controlling up to 247 slaves [12].

Modbus is very popular among manufacturers in the automation and industrial area, mostly due to its simplicity: a compact frame of roughly a dozen bytes is transferred at any given time (more if the payload is larger, but the frames remain small). It is a dominant protocol, used in one form or another by almost every manufacturer [13].

A Modbus message frame consists of five main fields: a) the slave address, b) the function to be performed, c) the start address of the data to be manipulated, d) the offset from the start address (essentially the number of registers), and e) the cyclic redundancy check (CRC). The end of a frame is marked by silence lasting 3.5 character times [14]. For a write command, the data to be written are also inserted into the frame. Typically, the master addresses one slave at a time. The slave's acknowledgment mirrors the master's query and, for read requests, also carries the data. In the special case of a "broadcast", the master requests a write operation from all slaves on the network; no acknowledgment is transmitted by the slaves in this case [14].

The defined functions allow a number of operations to take place. The main function codes are as follows [15]:

Figure 1: Modbus RTU frame illustration


Code  Function
0x01  Read coils
0x05  Write single coil
0x0F  Write multiple coils
0x03  Read holding registers
0x06  Write single register
0x10  Write multiple registers

Table 1: Modbus function codes. Coils are single-bit (boolean) variables; registers are 16-bit word (integer) variables. On the serial line, Modbus RTU transmits each byte least significant bit first; multi-byte register values are sent most significant byte first, while the CRC is sent low byte first.

Figure 2: RTU frame transmission, starting and ending with 3.5 character times of silence. Any silent interval of at least 3.5 character times (4.5 is shown here) denotes the start of a new frame [16]

The frame layout varies with the function. For a read query, no data are transferred from the master to the slave, but rather the other way around. Similarly, for a read/write operation on a single coil or register, the number-of-registers field is not included in the frame. Of course, the payload of the frame varies according to the data type being handled. Below is an example of the master requesting the slave with address 1 to write the value 3 to register #1014:

0x01 0x06 0x03 0xF6 0x00 0x03 0xA8 0x7C

Each field can be explained as follows:

• 0x01: slave address (1)

• 0x06: function code (write single register)

• 0x03 0xF6: the register's address, 1014 decimal (0x03F6)

• 0x00 0x03: the value to be written, 3 decimal (0x0003)

• 0xA8 0x7C: the CRC, 0x7CA8

Note that, as mentioned above, the CRC is transmitted low-order byte first.
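As an illustration of the frame layout, the following sketch (not the project's driver code) assembles a write-single-register frame and appends the CRC-16/MODBUS checksum; the field order follows the example above.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// CRC-16/MODBUS: reflected polynomial 0xA001, initial value 0xFFFF.
uint16_t crc16_modbus(const uint8_t* data, size_t len) {
    uint16_t crc = 0xFFFF;
    for (size_t i = 0; i < len; ++i) {
        crc ^= data[i];
        for (int bit = 0; bit < 8; ++bit)
            crc = (crc & 1) ? (crc >> 1) ^ 0xA001 : crc >> 1;
    }
    return crc;
}

// Build a "write single register" (function 0x06) RTU frame.
// The register address and value are sent most significant byte first;
// the CRC is appended low byte first, as Modbus RTU requires.
std::vector<uint8_t> build_write_single_register(uint8_t slave,
                                                 uint16_t reg,
                                                 uint16_t value) {
    std::vector<uint8_t> frame = {
        slave, 0x06,
        static_cast<uint8_t>(reg >> 8),   static_cast<uint8_t>(reg & 0xFF),
        static_cast<uint8_t>(value >> 8), static_cast<uint8_t>(value & 0xFF)
    };
    const uint16_t crc = crc16_modbus(frame.data(), frame.size());
    frame.push_back(static_cast<uint8_t>(crc & 0xFF));  // low byte first
    frame.push_back(static_cast<uint8_t>(crc >> 8));
    return frame;
}
```

The CRC routine can be checked against the standard CRC-16/MODBUS check value: the ASCII string "123456789" yields 0x4B37.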

2.2 Vision System

To detect whether the placed board is misaligned, and to translate the target coordinates when a misplacement is encountered, a vision system was designed and implemented. Using a camera and OpenCV (the Open Source Computer Vision Library, for real-time computer vision computing), the fiducial markers of a board were successfully identified, giving the software insight into the board's current position compared to a "correct" one, which was measured and recorded by placing the board on the mechanical table in the ideal position. In practice there will always be misplacements on a real-world production line, which makes a computer vision system a necessity.

Historically, early computer vision systems relied on identifying various items on a board, such as pin holes, or the edges of the board itself [17]. In modern PCB manufacturing, fiducial markers have become a standard, both for board placement checks and for parts requiring a higher degree of accuracy. Fiducial markers are most commonly circular, as this shape has been shown to give better results with machine vision systems [17].

2.2.1 Hough Transform

The circular shape of the fiducial markers makes them easily recognizable by computer vision algorithms, thanks to the Hough Transform (HT). The HT is a very popular method for shape recognition in the computer vision field [18], often unparalleled in accuracy and robustness, making it almost a default choice [19]. The HT was first used for the detection of straight lines, introduced by Duda and Hart [20]. Since then, it has been extended to the detection of circles [21], ellipses [22], and objects of arbitrary shape [23].

To apply the HT detection technique, the studied image is first converted to grayscale, followed by edge detection. Edge detection eliminates discontinuities due to slight brightness differences or other mismatches, and makes the boundaries of discrete objects much more apparent.

Once the edges are detected, the parameters of the shapes defined by the edge boundaries can be estimated.

It is known that a circle is described by the following equation:

(x − x0)^2 + (y − y0)^2 = r^2   (1)

where (x0, y0) is the center of the circle and r its radius. After edge detection, there are n pixels belonging to object boundaries in the image. Three parameters in this equation define a circle: the center (x0, y0) and the radius r. Therefore (1) can be parameterized as

x = x0 + r cos(θ)
y = y0 + r sin(θ)   (2)

where θ ranges from 0° to 360° and traces a full circle of radius r. Three parameters mean that the HT takes place in a 3D parameter space, which makes it computationally expensive [24]. By fixing the radius to a single known value, the computation can be done in two dimensions. That is the case throughout the present project: the camera sits at a fixed height above the board and the fiducial markers' dimensions are an industrial standard, so the radius of the desired circle is known.

With r fixed, the only remaining parameters are the two coordinates of the center, so each edge point (x, y) of the edge-detected image votes for the candidate centers given by

x0 = x − r cos(θ)
y0 = y − r sin(θ)   (3)

So, suppose we have a circle consisting of edge points. In the Hough parameter space, every point of the circumference denotes the center of a candidate circle of the known radius (in the figure, four sample points are shown for simplicity).

Each one of these candidate centers is an element of an array known as the accumulator. The accumulator is essentially an array of counters with one entry per image position. Every candidate circle of the target radius passing through a point increments that point's accumulator entry, giving it a vote. At the end of the process, a point with a high number of votes indicates a highly likely occurrence of the target shape in the image. In the figure below, the points where the red circles intersect each receive a vote; all four circles intersect at the center of the blue circle:

Figure 3: Hough transform parameter space: circle's center detection

The point indicated with the green dot will have the highest number of votes in the accumulator array, indicating a high probability that a circle of the desired radius is present, with this point as its center.
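The voting procedure described above can be sketched in plain C++ as a simplified stand-in for a library implementation such as OpenCV's; the image dimensions and the 1-degree angular resolution are illustrative assumptions:

```cpp
#include <cmath>
#include <utility>
#include <vector>

constexpr double kPi = 3.14159265358979323846;

// Vote for circle centers of a known, fixed radius: each edge point (x, y)
// increments every accumulator cell (x - r*cos(t), y - r*sin(t)) that could
// be the center of a circle of radius r passing through it. The cell with
// the most votes is returned as the most likely center.
std::pair<int, int> hough_circle_center(
        const std::vector<std::pair<int, int>>& edges,
        int width, int height, double radius) {
    std::vector<int> acc(static_cast<size_t>(width) * height, 0);
    const double step = kPi / 180.0;  // 1-degree angular resolution
    for (const auto& e : edges) {
        for (double t = 0.0; t < 2.0 * kPi; t += step) {
            const int cx = static_cast<int>(std::lround(e.first  - radius * std::cos(t)));
            const int cy = static_cast<int>(std::lround(e.second - radius * std::sin(t)));
            if (cx >= 0 && cx < width && cy >= 0 && cy < height)
                ++acc[static_cast<size_t>(cy) * width + cx];
        }
    }
    size_t best = 0;
    for (size_t i = 1; i < acc.size(); ++i)
        if (acc[i] > acc[best]) best = i;
    return { static_cast<int>(best % width), static_cast<int>(best / width) };
}
```

Feeding this function synthetic edge points sampled from a circle recovers that circle's center to within about a pixel, which mirrors the accumulator-maximum argument made above.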

Applying the above to a real example relevant to the present project, suppose we have the following image, acquired by the system's camera:

Figure 4: Sample photograph of board containing fiducial marker

Using edge detection, we can detect the boundaries of the objects displayed in the image. The result would look as follows:

Figure 5: Picture of board containing fiducial marker after edge detection

With the radius known, since the camera is located at a fixed position in relation to the board under inspection, we can apply the HT. Every edge point in the above image then represents a circle of this radius, as visualized in the following figure:

Figure 6: A fiducial marker in the Hough Transform space

The HT then performs the voting function, in which every candidate center accumulates probability votes, leading to the point of the image where the desired shape with the given parameters is most likely to be found. The accumulator can be plotted as follows:

Figure 7: The Hough Transform’s accumulator array

In Fig. 7, the edge points can be found on the x-axis (as coordinate pairs) and the votes are shown on the y-axis. By locating the absolute maximum of the accumulator array, we find that a circle of the given radius is most likely centered at the edge point with coordinates (110, 113). Drawing a marker at this location visually confirms that this is indeed the center of the fiducial marker.

Figure 8: Fiducial marker’s center detected

For a picture containing either multiple fiducial markers, or a single fiducial marker coexisting with other circular shapes that could falsely qualify as candidates, an approach that finds the local maxima must be applied, followed by disambiguation techniques to identify the actual, desired marker. Instead of this approach, which would add computational overhead to the application, a method focusing on a number of Regions of Interest (ROIs) has been implemented.

Given a picture with n fiducial markers, n ROIs are created around the places where the markers are expected to be found, according to the current board's specifications. Assuming that displacements of at most a few millimeters will occur, the ROIs are very small and the computational cost is negligible. If larger misplacements must be accommodated, the ROI dimensions can be increased at the cost of computation time. Even in that case, the overhead of processing the whole image is not comparable to the ROI approach, let alone adding a local-maxima search.
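The ROI construction can be sketched as follows; the pixels-per-mm scale, the expected marker position, and the margin in the example are illustrative assumptions, not values from the project:

```cpp
#include <algorithm>

// Axis-aligned region of interest, in pixel coordinates.
struct Roi { int x, y, w, h; };

// Build a small search window around the expected fiducial position.
// expected_x_mm/expected_y_mm come from the board design data, px_per_mm
// from the fixed camera geometry, and margin_mm bounds the misplacement
// expected in practice (a few millimetres).
Roi roi_around_fiducial(double expected_x_mm, double expected_y_mm,
                        double px_per_mm, double margin_mm,
                        int image_w, int image_h) {
    const int cx = static_cast<int>(expected_x_mm * px_per_mm);
    const int cy = static_cast<int>(expected_y_mm * px_per_mm);
    const int half = static_cast<int>(margin_mm * px_per_mm);
    const int x0 = std::max(0, cx - half);
    const int y0 = std::max(0, cy - half);
    const int x1 = std::min(image_w, cx + half);
    const int y1 = std::min(image_h, cy + half);
    return { x0, y0, x1 - x0, y1 - y0 };  // clamped to the image bounds
}
```

Clamping to the image bounds keeps the window valid even for markers near the board edge.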

2.2.2 Coordinates translation

With the locations of the circles acquired, the coordinates must be translated whenever the board is misaligned. This gives two sets of points, A and B: A contains the original points read from the Bill of Materials (BOM) file, and B the corresponding points on the board as currently placed. The problem is thus to transfer the coordinates from one reference frame to the other, so that the original coordinates match the component positions on the misplaced board.

To move a set of points from one reference system to another, a rigid body transformation is performed [25]:

B = R · A + t   (1)

where R is the rotation matrix and t the translation vector. The problem is therefore to find the R and t that minimize the error E, which is [26]:

E = Σ_{i=1}^{n} ||B_i − (R · A_i + t)||   (2)

with A_i and B_i representing the i-th original and current point, respectively.

There are many approaches to solving this problem. The Singular Value Decomposition (SVD) solution used in this project was proposed by K. S. Arun, T. S. Huang, and S. D. Blostein [26], and compared to other methods:

"An iterative algorithm for finding the solution was described in Huang, Blostein, and Margerum [27]; a non-iterative algorithm based on quaternions in Faugeras and Hebert [28]."

The comparison showed that the SVD method is faster than the other methods for larger numbers of points, making it the better choice for a system that could eventually require that many points, and thus more future-proof.

Summarizing the solution of Arun et al. [26], it can be broken down into the following steps:

1. Calculate the centroids of both point sets
2. Compute the optimal rotation matrix R that minimizes E
3. Calculate the translation vector t

First of all, the centroids are calculated as follows:

Ā = (1/n) Σ_{i=1}^{n} A_i
B̄ = (1/n) Σ_{i=1}^{n} B_i   (3)

Now let:

a_i = A_i − Ā
b_i = B_i − B̄   (4)

It can be proved that, for the correct solution of (2), both point sets have the same centroid [27], i.e. B̄ = B̄′ = R · Ā + t, where B̄′ is the centroid of the point set after it has been transformed with the optimal R and t:

B̄′ = (1/n) Σ_{i=1}^{n} B′_i = R · Ā + t   (5)

Now, substituting (4) into (2), we have:

E = Σ_{i=1}^{n} ||b_i + B̄ − R · a_i − R · Ā − t||   (6)

Because of (5), and since B̄ = B̄′, the error to be minimized finally simplifies to

E = Σ_{i=1}^{n} ||b_i − R · a_i||   (7)

To compute the rotation R, we first calculate the matrix

H = Σ_{i=1}^{n} a_i · b_i^T   (8)

where T stands for transposition; H is the familiar dispersion (cross-covariance) matrix. By calculating the SVD of H,

H = U S V^T   (9)

we can acquire the rotation as follows:

R = V U^T   (10)

Finally, the translation can be computed:

t = B̄ − R · Ā   (11)

It is therefore possible to transfer a set of points, namely the coordinates of the components to be placed as extracted from the Bill of Materials file (the "correct" coordinates), so that they match the actual coordinates of a misplaced board. The original coordinate system is transformed to match the current one, and not vice versa, since the latter would require manual work (adjusting the board by hand).
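In two dimensions, which is the case for a board lying flat on the table, the SVD of the 2×2 matrix H collapses to a closed form: the optimal rotation angle is the atan2 of the summed cross and dot products of the centered point pairs. The following sketch (not the thesis code) implements this closed form:

```cpp
#include <cmath>
#include <vector>

struct Pt { double x, y; };
struct Rigid2D { double theta, tx, ty; };  // rotation angle and translation

// Least-squares rigid fit B ~= R(theta) * A + t for 2D point sets of
// equal size. In 2D, the SVD of the 2x2 dispersion matrix reduces to a
// single atan2 of the summed cross and dot products of the centred pairs.
Rigid2D fit_rigid(const std::vector<Pt>& A, const std::vector<Pt>& B) {
    const double n = static_cast<double>(A.size());
    Pt ca{0, 0}, cb{0, 0};  // centroids of A and B, as in step 1
    for (size_t i = 0; i < A.size(); ++i) {
        ca.x += A[i].x / n;  ca.y += A[i].y / n;
        cb.x += B[i].x / n;  cb.y += B[i].y / n;
    }
    double sdot = 0.0, scross = 0.0;
    for (size_t i = 0; i < A.size(); ++i) {
        const double ax = A[i].x - ca.x, ay = A[i].y - ca.y;
        const double bx = B[i].x - cb.x, by = B[i].y - cb.y;
        sdot   += ax * bx + ay * by;   // trace of H
        scross += ax * by - ay * bx;   // antisymmetric part of H
    }
    const double theta = std::atan2(scross, sdot);
    // t = B_centroid - R * A_centroid, as in step 3
    const double c = std::cos(theta), s = std::sin(theta);
    return { theta, cb.x - (c * ca.x - s * ca.y),
                    cb.y - (s * ca.x + c * ca.y) };
}
```

For noise-free data the recovered angle and translation are exact up to floating-point error, which makes the routine easy to verify against a synthetically rotated point set.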


3 Implementation

3.1 System overview

The implemented system consists mainly of three modules (or subsystems):

1. Central unit (Intel NUC NUC5CPYH)
2. Vision subsystem
3. Motor subsystem (MIC488 controller, SMC124 drivers and stepper motors)

The Intel NUC, essentially a mini PC, serves as the central unit of the system. Its responsibilities are purely computational. It performs the file I/O, parsing the desired Bill of Materials file (.csv format) to extract the coordinates of the components to be placed. It then gets visual feedback on the current alignment of the target board through the vision system, and performs the coordinate translation after the image processing needed to determine the current locations of the board's fiducial markers. With coordinates suited to the current setup available, it sends the necessary commands through the serial port, acting as a Modbus master, to the controller of the motor subsystem.
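As a sketch of the parsing step, a minimal reader for a hypothetical "reference,x,y,package" .csv layout could look like this (the actual BOM column layout used in the project is not specified here):

```cpp
#include <sstream>
#include <string>
#include <vector>

// One placement taken from the BOM file. The exact column layout of the
// project's .csv is not documented here; a minimal
// "reference,x_mm,y_mm,package" format is assumed purely for illustration.
struct Placement {
    std::string ref;       // e.g. "Q1"
    double x_mm, y_mm;     // target coordinates on the board
    std::string package;   // e.g. "TO-220", used to derive the Z height
};

std::vector<Placement> parse_bom(std::istream& in) {
    std::vector<Placement> out;
    std::string line;
    while (std::getline(in, line)) {
        std::istringstream ls(line);
        Placement p;
        std::string x, y;
        if (std::getline(ls, p.ref, ',') && std::getline(ls, x, ',') &&
            std::getline(ls, y, ',') && std::getline(ls, p.package)) {
            p.x_mm = std::stod(x);
            p.y_mm = std::stod(y);
            out.push_back(p);
        }
    }
    return out;
}
```

Malformed lines are simply skipped; a production parser would report them instead.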

Figure 9: The final setup of the system


The vision subsystem receives as input the coordinates extracted from the BOM file. Its first task is to convert the fiducials' coordinates into picture (pixel) coordinates. This is possible because the camera is fixed above the board, with a lens of fixed focal length, and the location of the board's upper left corner (the origin of a PCB's coordinate system) is indicated. The number of pixels per mm can therefore be calculated and used to find where the fiducial markers ideally should be, and a Hough transform is performed on the area around each such point to locate where they actually are. With both the indicated and the actual locations of the fiducials known, a rigid transformation is performed and the rotation and offset are acquired, to be used for translating the coordinates of the components to be placed.

The motor subsystem, operating as a Modbus slave, receives the coordinates through the serial port. Upon the reception of such a command, it navigates to the target location and lays the component. That is followed by an acknowledgment to the central unit that the component has been placed, and the procedure starts over until all the components from the list have been placed. During its initialization, on system startup, the motors perform the homing function, i.e. they move until they reach the limit switches. When that happens, they reset their positions. The placer is then moved to the predefined start of the board (the upper left corner of the PCB), and waits for the translated coordinates of the components to be placed, acquired by the translation that takes place within the vision subsystem.

Additionally, since the height of the placement is of crucial importance in this application, the .csv parser identifies the component's package as well; the distances between the body and the pins, as well as the offset introduced by the grabber, are known, so the height parameter passed to the third motor varies according to the package.


Figure 10: A flowchart of the system. After the initialization, the vision system takes the coordinates as input and exports the translated coordinates to the motor system, after identifying and accounting for the displacement of the board at the current time. The motor system then navigates the placer to the designated locations and places the components


Figure 11: The Intel NUC extracts the coordinates from the BOM file, performs the coordinate translation using the visual feedback from the vision subsystem and sends the commands to the motor subsystem, which performs the placement on the mechanical table

3.2 Materials used

3.2.1 Hardware

Motors

For the actuating needs of the project, stepper motors were used to drive the placement nozzle across the mechanical table, one for each axis.

Stepper motors are electromechanical motors that convert the current supplied to them (on their stator) into motion of their shaft (rotor) and, by extension, of anything attached to it. They can be used for applications that require control of the speed, acceleration or position of a moving device. A stepper motor's stator consists of coil windings, or phases. By providing a pulse to every phase in a sequential manner, the rotor moves, one step at a time.

The choice of this particular type of motor was made after considering that sufficient accuracy can be obtained for the needs of this system without adding to the complexity of the system as a whole (but rather, reducing it).


Stepper motors can operate in open loop, meaning there is no feedback loop, which would otherwise add to the system's cost as well as its complexity. That is the actual reason that "80-90 percent of stepper motor applications are open loop incremental motion control applications" [29]. When operating in open loop, a motor's angular step effectively becomes its precision [30]. As already mentioned, a stepper motor is driven by digital pulses, which makes microstepping possible; with microstepping, instead of a full step (usually 1.8° or 0.9°) per pulse, the driver breaks a step down into smaller ones, so that the motor can reach any position between two full steps. Depending on the step size, the interval between two steps can be divided into 8, 16, 32 and so on. By using microstepping, an accuracy of up to 10 µm can be achieved [31].

Additionally, a stepper motor's error is noncumulative [30], meaning that a momentary slipping of the rotor will only affect the precision once, rather than accumulating over the whole operation. Since this property, together with the maximum achievable precision, satisfies the needs of the project, stepper motors were chosen over more expensive and complex solutions (such as servo motors).

Motor drivers

To drive the stepper motors with high accuracy and precision, the SMC124 driver circuit by WObit [32] was chosen. This choice was based on the fact that this specific driver allows microstepping, so that the motors' accuracy can match the project's needs; the offered step resolution spans from 1/2 to 1/128, which means that it can drive the motors with a step as small as approximately 0.007°.

Another fact that made the SMC124 an interesting choice is that, in tandem with the required microstepping, it also offers current reduction. Since the maximum supplied current far exceeds that of the motors, the driver can supply the power they need; the reduction feature can also be utilized in future work, e.g. in a smart circuit that lowers the supplied current when the motors are at rest, reducing the overall power consumption (and, as a side effect, the noise), leading to a more environmentally friendly, "green" application.

Controller

Although no proprietary hardware or software connection is required to use the above-mentioned drivers, the controller chosen is also manufactured by the same company, WObit. The MIC488 programmable motion trajectory controller [33] can control up to 4 motor drivers and can act as a Modbus slave. Its durable design and the use of the Modbus protocol, a de facto communication standard as previously described in 2.1, make the MIC488 a device of industrial specifications and suitable for this project.

Its function set meets the project's requirements, as it provides the ability to set the velocity, acceleration and position of the motors, as well as to read the current values of these quantities. Additionally, it automatically converts the pulses read from the driver into more useful units (e.g. mm).

Furthermore, by providing 8 inputs and 8 outputs, the homing function of the motors can be seamlessly implemented, leaving unused ports for future expansions. Regarding future work, the controller supports up to 4 motors, which leaves a slot for an additional one; potentially, this can be utilized in the future, e.g. for making the vision system more autonomous.

Intel NUC computer

The Intel NUC NUC5CPYH [34] was chosen as the central processing unit of the system, acting as a Modbus master to the controller slave and carrying out the computationally demanding processes. The NUC5CPYH is a mini PC, capable of offering the full features of a standard computer, since it is running on a dual-core processor, supports DDR3L RAM and SSD storage, as well as the standard peripherals needed, namely USB 3.0 and 2.0 ports plus wired and wireless networking interfaces.

The motivation behind this choice was mostly based on two factors: minimizing the processing time of some computationally expensive operations and bringing a more user-friendly environment to potential operators of the system, since the NUC5CPYH supports all the popular operating systems.

Since the system includes a vision subsystem performing image processing tasks, the CPU overhead would be significant for any embedded platform, even after code optimizations. Apart from the processing power, the ability to choose an OS (mainly, a choice between Windows and Linux distributions) makes it very user friendly, since the developed code is not platform-dependent.

Additionally, as a mini PC, it is very small and lightweight, a fact that makes it highly portable and attachable to an industrial setup, which is the target environment of the present system.

3.2.2 Software

Programming language

Generally, the programming needs of the project were covered by using C++.

Although the C language seems to be the de facto choice for embedded applications, since it is clearly the prevalent language by a great margin when it comes to this kind of system [35], the related, object-oriented C++ benefited the development process in a number of ways.

More specifically, the fact that the system consists of various different modules (motor controller driver, vision system etc.) makes it rather heterogeneous. In terms of coding, using a structured programming language such as C for the needs of a project like the present one would mean that the code would be significantly longer. The need to combine the different modules in a more abstract way, in contrast to plain C, led to the decision to choose C++ as the development language.

Furthermore, the adoption of OpenCV for the vision needs of the system and of the Qt framework for the GUI development (both described in the coming paragraphs) makes it an obvious choice, as both tools directly support C++


and inherently work in an object-oriented manner, making code developed in C++ very easy to integrate with them.

Apart from the aforementioned reasons, and in a more general aspect, object-oriented code is customarily more elegant due to the encapsulation of both data and procedures in objects, which makes the code essentially easier to maintain, adding a future-proof aspect as well.

Motor controller interface

For communicating with the motor controller and having control over the motors, the open-source library libmodbus [36] was utilized. It was developed by Stéphane Raimbault and provides a Modbus interface over either the serial port or Ethernet. It provides an appropriate backend for the chosen interface, so that the level of abstraction from the programmer's perspective eliminates the need to explicitly handle the serial port programmatically. Combined with its open-source character and seamless function, it was a perfect choice for the project.

OpenCV

For implementing the vision subsystem, the world's currently most popular open-source library for computer vision [37] was used. The reasoning is straightforward, since it is very powerful, has a strong community and offers more than enough processing features for this project's needs. To indicate the robustness and importance of OpenCV in the scientific community, it has been used for medical applications [38], within the ubiquitous computing area [39]

and in security applications [40], among others. Furthermore, according to its development team, well-established companies like Google and Microsoft have used OpenCV in their applications [41].

It was therefore the most viable solution for the vision needs of the system.

During the development process, other tools, commercial or not, such as MATLAB and Octave, were used for image processing, but deploying the final project with those tools did not seem a feasible option, mainly due to the processing overhead added by their complexity. Additionally, OpenCV integrates neatly with the overall code, as it directly supports C++ and readily offers an API. That made the development and integration with the other modules rather harmonious.

Qt framework

The developed solution was made a lot more user-friendly through the design of a Graphical User Interface (GUI). For its development, the Qt framework was chosen. The main reason behind this decision was that Qt builds on the native APIs of the OS the software runs on, without an extra abstraction layer, which brings the performance to very satisfactory levels.

That has a cost in platform dependency, as it may require that some parts of the code be rewritten in order to deploy to another platform. Despite that, the developed GUI has been tested under the X11


platform (Linux) as well as under Apple's OS X, with the necessary adjustments being next to nothing.

Another reason that made Qt a very appealing choice is the way it handles events (mouse clicks, typing etc.). Qt uses an event-handling mechanism named signals and slots. This is essentially a communication method between classes;

several actions may be triggered by an event in the user space, firing a signal, which can contain additional data on top of the information that the event it is connected to has occurred. Signals trigger functions (slots), in the same or another class, which process the information and make the necessary adjustments and/or give feedback back to the user.

This robust event handler made it possible to transfer data via inter-class communication, such as passing the translated coordinate sets from the camera class (where the image processing takes place) back to the motor controller and waiting for the user's confirmation to start the actual placement process, again with a signal-and-slot approach.

3.3 System modules

3.3.1 Motor subsystem

The motor subsystem is responsible for handling the operation of the three motors, one for each axis the system is able to move across. The main component of the motor subsystem is the MIC488 Motor Controller. The MIC488 has control over the driver circuits of the motors, meaning it can start and stop the motors, give them a target destination (in pulses or another specified unit, given that the pulses-per-unit number has been stored in the driver's memory), fetch their status, set parameters such as velocity, acceleration and deceleration, and immediately stop the motors if necessary.

For the development of the software that communicates with the motor controller, the open-source library libmodbus was used. The library allows for serial (RTU) and TCP (Ethernet) communication with Modbus devices, and is distributed under the LGPL v2.1+ license [42].

Based on the libmodbus library, which provided the functionality needed, such as Modbus frame creation, CRC checking, and reading and writing the registers, a C++ class was composed, containing implementations of all the necessary functions.

The following list gives an overview of this class:

• motorController(const char*, int, char, int, int, int)

The class’ constructor. It initializes an object of the motorController class.

The parameters are: device (serial port), baud rate, number of parity bits, number of data bits, number of stop bits.

• ~motorController()

The class’ destructor. Used to delete the created object.

• int initController(int)


Initializes a slave device. This function accepts as input the slave address of the device. The physical address (serial port) of said device has already been passed to the constructor, and the present function assigns the input slave address to this location. It returns the error code: 0 for no errors, -1 for errors and also prints the error message.

• void motorsOn(bool OnOff)

Turns the motors ON or OFF, according to the boolean input.

• void setPosition(uint16_t M, float pos)

Sets a target position for a motor, which is immediately set in motion towards it. The motors are assigned to numbers using preprocessor parameters. The desired position is passed to the function as a floating point variable. The target register is set based on the selected motor. The desired position is then converted to Modbus format (which means that the least significant word comes first), and finally the function writes the converted value to the appropriate register on the slave device.

• void setMaxVelocity(uint16_t M, float vel)

Sets the maximum velocity, given as a floating point value, that can be achieved by a motor (first argument). The process is identical to the setPosition function, apart from the address of the register that the value is written to.

• float getPosition(uint16_t M)

This function returns the current position of a motor, whose identifying number is passed as the only argument. The appropriate registers, according to the desired motor, are read and the value is put into a temporary variable. The two-register value is converted to float and then returned by the function.

• int getStatus(uint16_t M)

Reads the appropriate register for a motor (input) and returns the status.

The possible return values are the following [33]:

– 0: drive turned off (EN signal inactive)

– 1: drive turned on, no motion (EN signal active)

– 2: drive in set velocity mode

– 3: drive in motion to set position mode

– 4: drive achieved the set position

– 6: drive in homing mode

– 9: drive achieved limit position L while moving towards a negative position value (by program or proximity sensor signal KL)

– 10: drive achieved limit position R while moving towards a positive position value (by program or proximity sensor signal KR)

• void home(uint16_t M, float vel)

Performs the homing function for the selected motor. The float param- eter designates the desired velocity to be achieved for reaching the limit switches.

• void motorsStop()

Immediately stops all the motor motion.

By creating an object of this class, the programmer takes control of the entire motor subsystem. The object-oriented approach makes future work much more feasible, as all the functionality needed for a future expansion into an automatic soldering table already exists.

3.3.2 Vision subsystem

The vision subsystem consists mainly of the camera, and is responsible for the detection of the fiducial markers and the translation of the coordinates based on this detection. It mainly runs image processing routines and can be summarized, at a more abstract level, as taking the BOM file as input and exporting the actual coordinates that the system needs to navigate to.

Figure 12: After the initial snapshot from the camera, the more intensive image processing operations are performed on small ROIs around the ideal locations of the fiducial markers, in order for the process to be more accurate and faster

For the vision subsystem to work properly, a basic assumption has to be made: the camera is fixed in a certain location and is not mobile at all. This ensures that the distance between the board and the lens is always the same and known. Furthermore, the focus has to be constant as well, which is taken care of programmatically. Having the position of the camera fixed allows the system to convert distances from pixels to mm and vice versa, which is ultimately of crucial importance, as it gives the correct distance that the motors have to cover to reach a location of interest once the placement procedure starts.


Figure 13: The detected fiducial markers visualized

In order for the system to maintain its modularity, the vision subsystem was also implemented as a C++ class. The class inspectionCamera encapsulates all the necessary sub-routines for board misplacement detection.

A listing of the class’ functions and a description of them follows:

• inspectionCamera(int)

The class’ constructor. It creates an object of type inspectionCamera.

The integer parameter refers to the number of the desired camera to be used as the inspection camera. The enumeration of the system cameras is taken care of by OpenCV.

• Mat captureImage(int focus)

This function returns a snapshot from the object's camera, of type Mat, which is OpenCV's matrix type. The focus can be set manually as a parameter, although for the camera used in this project the focus value has no defined unit, and the value that gives the best result has to be picked manually.

• int locateFiducial(Mat src, float targetX, float targetY, int minRadius, int maxRadius)

This is the function that locates the fiducial markers. The input src is the snapshot provided by the captureImage function. The targetX and targetY parameters are the coordinates of the location where the fiducial marker is expected to be found. minRadius and maxRadius set the range of the size of the circles that the Hough transform will look for. The image is cropped to an area of interest, a rectangle of configurable dimensions centered around said expected location. The Hough transform is performed only on the area of interest, which makes the whole procedure a lot faster. The detected circle's center coordinates are stored in a public variable, so that they are accessible from the main program.

• void coordinatesTranslation(float coordinates[][2], int numComp)

The translation of the components' coordinates takes place through this function. First, rigidTransform is performed, using the correct and actual positions of the fiducial markers, resulting in the computation of the rotation and the offset. With these two values, all the coordinates of the components to be placed are translated, and their actual locations, based on how the board is positioned at the time, are computed and stored for the motor subsystem to use in the motor navigation and eventually the actual placement.

• void rigidTransform(Mat, Mat, Mat& R, Mat& t)

Serving as a sub-routine of the coordinatesTranslation function, rigidTransform calculates how much rotation and offset the board is currently under; this is done by comparing a number of points in their current position against where the "proper" position has been set. The background behind this procedure has been thoroughly described in section 2.2.2.

• void boardToImageCoordinates()

As explained earlier in the present section, the camera's location and focal length are always fixed; that allows the transition between pixels and mm. Since the locations of the fiducials are acquired from the BOM in mm, such a conversion is necessary. This function converts the mm coordinates to pixel coordinates, so that the locateFiducial function knows where exactly in the picture to look for the fiducial markers.

Figure 14: A visualization of the coordinates translation (rigidTransform()). Due to the misplacement of the board's upper left corner (the ideal position is indicated by the green cross on the subpicture), the expected location of the component is shown with the blue circle. After accounting for the misplacement using the vision system, the green circle indicates the result that this approach gave


Figure 15: The vision subsystem. Getting the fiducial and component coordinates, it first converts them to picture (pixel) coordinates. It then looks for the fiducials in the area around the location where they should be. After that, the rigid transformation returns the rotation and the offset, and finally the components' coordinates are translated


3.4 Graphical User Interface

In order for the overall implementation to become more user-friendly and actually emulate what an operator of such a machine would use, a GUI was developed. Its use is rather straightforward: when the program is launched, it takes a snapshot from the camera and displays it in the window, for the user to get an overview of what the camera can actually see at that moment.

For the procedure to start, the user has to select a Bill Of Materials file (.csv format). That is possible by either typing (or pasting) the full path to the file, or pressing the button to go to the file navigation and select the file from there.

By pressing ”Load”, the BOM file is parsed and the number of fiducial markers and components is displayed on the window.

After the extraction of the contents of the BOM file, the fiducial detection is ready to start. The button ”Locate fiducial” starts the detection. The image from the camera is cropped at an area of interest around the expected location of the fiducial marker, and the actual location is being identified by applying the Hough transform. This is repeated for all the fiducial markers, and their locations are reported under the image. When all the fiducial markers are detected, the coordinates are translated and the calculated offset and rotation appear underneath the image as well.

Upon the successful visual identification of the board, the motor system is ready to be started. The button ”Start motors” establishes the communication with the motors’ controller via the serial port. After that, the motors are enabled and the homing function is called; that ensures that a common reference will be used at every run of the system.

Finally, the "START" button moves the motors so that the holder faces the place where the board should begin, and the position is reset; that way, the motors' and board's point (0,0) are equivalent. Right after that, the motor system starts to navigate to the coordinates (translated earlier by the vision system) and place the components.


Figure 16: The GUI gives a preview of the image captured by the camera.

The .csv BOM file can be loaded from the top right dialog. Below the image, the results of the fiducial detection are displayed, followed by the calculated rotation and offset.


4 Testing and evaluation

The experiments conducted on the final implemented system gave satisfactory, tangible results. Generally speaking, the system was able to place a component at the desired locations, after accounting for board displacement through the vision subsystem, with a measured repeatability of at least 30 successful placements for each of the 6 different target locations on the test board.

The accuracy of the motors was generally proven to be better than 0.7 mm, which serves the present system well, since it deals with relatively large (PTH) components that have equally spacious pin holes. The vision system reached an accuracy of 0.6 mm and was equally important to the successful component placement, managing to provide practically the exact location of the placement, even when a relatively significant displacement had taken place.

4.1 Motor accuracy

The first step towards measuring the accuracy achieved by the motors is to convert the number of pulses counted by the motor's driver circuit into a distance unit, conveniently directly to mm.

The motor system is powered by stepper motors in open loop. The motors' step size is 0.9° and the system is belt-driven, with the belt's pitch being 3 mm and the pulleys having 10 teeth. Also, microstepping of 1/32 has been implemented to increase the accuracy to the necessary levels. So the number of steps per mm can be calculated as follows [43]:

n_step = (360° / M_s) · (1 / f_m) / (p · N_t)    (1)

where:

M_s: the motor's step angle
f_m: microstepping factor
p: pitch of the belt
N_t: number of teeth on the pulley

Therefore, (1) would give

(360° / 0.9°) · (1 / (1/32)) / (3 mm · 10) = 426.6667 pulses per mm    (2)

By entering this number into the driver's controller, it was possible to get accurate positioning, in mm, of the placer controlled by the motors. That theoretically gives a resolution on the scale of µm, whereas in reality, as found experimentally, the response becomes rather unpredictable below 0.7 mm, which is enough for the scope of the project. The following table contains the results of the measurements:


Distance (mm)  Pulses  Expected  Ratio  Error
10    4307    4260    431    47
20    8588    8520    429    68
30    12864   12780   429    84
40    17141   17040   429    101
50    21435   21300   429    135
60    25722   25560   429    162
70    30019   29820   429    199
80    34313   34080   429    233
90    38599   38340   429    259
100   42868   42600   429    268
110   47156   46860   429    296
120   51373   51120   428    253
130   55639   55380   428    259
140   59888   59640   428    248
150   64166   63900   428    266
160   68384   68160   427    224
170   72622   72420   427    202
180   76825   76680   427    145
190   81019   80940   426    79
200   85258   85200   426    58
210   89502   89460   426    42
220   93749   93720   426    29
230   97985   97980   426    5
240   102211  102240  426    -29
250   106442  106500  426    -58
260   110684  110760  426    -76
270   114915  115020  426    -105
280   119126  119280  425    -154
290   123342  123540  425    -198
300   127555  127800  425    -245

RMSD over all rows: 175.167 pulses

Table 2: Motor pulses measurements


Figure 17: Plot of the step count against distance (mm). The linearity confirms that along the whole axis, the number of steps per mm is constant (the error is too small to be visualized at this graph's scale)

It is observed that there is a small deviation from the theoretically calculated steps/mm ratio, which is most likely attributable to mechanical imperfections and, of course, to the motors operating in open loop. By taking a closer look at the error and plotting it, it is possible to extract the threshold beyond which the precision becomes unclear.

Figure 18: The deviation of the pulse count from the expected value per distance (mm), visualized. The graph shows that for distances down to 0.7 mm it is certain that the motors will reach the desired location. When entering an area where accuracy better than 0.7 mm is required, the behavior cannot be predicted with sound precision

It is therefore concluded that for movements down to 0.7 mm, the motors are positioned accurately, as they should be. Given that the system was built to place PTH components, which can have a pin spacing of more than 1 mm, something that is reflected in their assigned pin holes (as the pins are also relatively big and thick), the achieved accuracy was satisfactory and led to successful placements.


4.2 Vision system accuracy

The vision system worked as intended, with an error low enough to enable accurate location of components during sessions with displaced boards. Since the distances are relatively small, a very slight displacement would send the placer to a completely wrong location. The tests before the measurements showed that after the vision system produced its output, i.e. the translated coordinates of the component(s), the initial displacement had been largely compensated.

Figure 19: The blue circle indicates where the component's center should have been, had the board been placed properly. The green circle shows where the same point was reassigned on the coordinate system after accounting for the observed displacement, using the fiducial detection and rigid transform

To quantify the accuracy achieved by the vision system, the methodology remained the same in principle; by running tests again for various placements of the board, a certain component's location was observed. The actual location of the object was registered, as well as the location that the coordinate translation pointed to. With the observed and expected results at hand, the computation of the Root Mean Square Deviation (RMSD) allowed a sound approximation of the vision system's accuracy.

The RMSD is calculated as follows:

RMSD = sqrt( Σ_{t=1}^{n} (a_t − b_t)² / n )    (1)

where n is the number of data points, a_t is the observed value (the result of the coordinate translation) and b_t is the predicted value, or rather, where the center of the component is actually located in the picture.


What is measured here is how close to the actual location of the component the translation will put the target location; the coordinates translation is ultimately the reason behind the implementation of the whole vision system in the first place. Therefore, and under different placements that lead to different translations, the mean deviation from the point of interest was calculated.

The measurements were taken for 10 different cases, i.e. different misplacements, and for 4 different component locations per placement, leading to 40 measurements. Upon identifying where exactly the component's center is located, the vision algorithm was allowed to run, locating the fiducials, performing the rigid transform and translating the coordinates. Ideally, the actual and measured locations should match. To identify to what extent this holds, the distance between the expected (actual) and real (measured) points was simply calculated using the Pythagorean theorem, as follows:

d = sqrt( (x2 − x1)² + (y2 − y1)² )    (2)

Figure 20: The red dot indicates the actual center of the component, whereas the green circle is the output of the vision system, placing the component with a slight offset. To measure the distance shown with the black arrow, the distance between the red dot and the center of the green circle was considered, under different placements and for different components


No.  x (expected)  y (expected)  x (measured)  y (measured)  Error (distance)  RMSD

AQ22

1 1069 836 1070,6 836,58 3,01 3,4547

2 1024 827 1024,4 828,4 2,58

3 1080 835 1081,6 836,3 3,65

4 1065 809 1066,7 809,06 3,01

5 1023 809 1025,3 809,33 4,11

6 1024 873 1026 873,57 3,68

7 1091 870 1092,9 872,06 4,96

8 1056 855 1057,5 856,04 3,23

9 1056 824 1057,3 825,05 2,96

10 1023 826 1024,7 825,85 3,02

AQ17

11 1123 835 1124,1 836,01 2,64

12 1076 826 1077,9 827,74 4,56

13 1135 835 1135,2 835,76 1,39

14 1119 808 1120,2 808,51 2,31

15 1077 808 1078,9 808,76 3,62

16 1078 872 1079,6 873,01 3,35

17 1146 869 1146,5 871,51 4,53

18 1109 854 1111,1 855,51 4,58

19 1109 824 1110,9 824,51 3,48

20 1077 824 1078,3 825,25 3,19

AQ9

21 1176 835 1177,7 835,44 3,11

22 1129 827 1131,5 827,09 4,43

23 1187 834 1188,7 835,23 3,71

24 1172 808 1173,8 807,96 3,19

25 1132 809 1132,5 808,18 1,70

26 1132 871 1133,1 872,45 3,22

27 1199 869 1200 870,96 3,90

28 1163 855 1164,7 854,98 3,01

29 1163 823 1164,4 823,98 3,02

30 1130 824 1131,9 824,66 3,56

AQ23

31 1339 834 1341,2 833,69 3,93

32 1293 825 1295 825,09 3,54

33 1350 834 1352,2 833,58 3,96

34 1336 807 1337,3 806,29 2,62

35 1294 807 1296 806,43 3,68

36 1296 869 1296,7 870,72 3,29

37 1362 869 1363,6 869,29 2,88

38 1326 853 1328,2 853,37 3,95

39 1326 823 1328 822,35 3,72

40 1294 823 1295,4 822,84 2,49

Table 3: Coordinates translation measurements table. All numbers are in pixels


Therefore, using (1) on the data points of the above table, the Root Mean Square Deviation is calculated to be 3.4547. This figure is expressed in pixels, and means that the translated coordinates indicate the desired location with an error of about 3.45 pixels. Given the fixed camera location and the fixed focal length of the lens, a conversion factor between real-world millimeters and pixels was estimated for this setup and used throughout the testing and implementation of the vision system. Converting the RMSD to real-world units, the mean deviation of the translated coordinates from the real location amounts to roughly 0.6127 mm at the fixed distance to the camera.
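The pixel-to-millimeter conversion can be sketched as below. The calibration constant is not stated in the surrounding text, so the sketch back-solves a placeholder value from the two reported figures (3.4547 px corresponding to 0.6127 mm); it is illustrative only and holds only for this fixed camera distance and focal length:

```cpp
#include <cassert>
#include <cmath>

// Placeholder calibration constant, back-solved from the reported values
// (3.4547 px <-> 0.6127 mm), roughly 5.64 px/mm. Not a measured figure.
const double PIXELS_PER_MM = 3.4547 / 0.6127;

// Convert a pixel-space deviation to real-world millimeters.
double pixelsToMm(double px) { return px / PIXELS_PER_MM; }
```

By construction, feeding the pixel RMSD through this conversion reproduces the reported 0.6127 mm.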


5 Challenges faced

Some issues were encountered during the implementation of the system. They can be categorized as issues arising during the motor subsystem development and, correspondingly, during the vision subsystem development. On the motor side, the issues concerned the positioning error as well as the initial approach of writing a custom C library that would handle every action needed, including the communication, framing, timing and so on. As far as the vision subsystem is concerned, the main issue was porting the MATLAB prototype to OpenCV and C++ while reproducing the same results, as the difference in programming level was noticeable.

Furthermore, the accuracy of the fiducial detection is sensitive to lighting conditions and to the size of the region of interest on which the algorithm is run.

5.1 Motor subsystem

The first attempt to control the motors was to communicate with their controller using an all-inclusive, custom-written C source file. Actual communication was indeed possible using that approach, but problems quickly arose. The main one was handling the communication timing. Modbus, being a UART-based communication protocol, requires a very specific amount of silent time between frame transmissions, as explained in detail in Section 2.1. This could not easily be achieved from the application side alone, without going down to the hardware level and utilizing the serial port's drivers directly. Furthermore, the creation and transmission of each and every frame needed to be handled explicitly by the programmer, which proved cumbersome. To overcome this hindrance robustly, the open-source library libmodbus was introduced into the design (3.2.2). Libmodbus handles the serial communication down to the hardware level: it provides the developer with functions for writing to and reading from the slave device's registers, converts decimal values to hexadecimal, orders the MSB and LSB correctly for Modbus communication, and eliminates the timing problem. Adopting a shared Modbus library therefore proved necessary in order for the project to be completed within a reasonable time frame.
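The timing requirement that made the hand-rolled approach painful is concrete: the Modbus-over-serial-line specification mandates a silent interval of at least 3.5 character times between RTU frames (one RTU character being 11 bits), with a fixed interval of 1750 µs recommended above 19200 baud. A small sketch of that computation (the function name is an assumption, not a libmodbus API):

```cpp
#include <cassert>

// Minimum silent interval between Modbus RTU frames, in microseconds.
// One RTU character is 11 bits (start + 8 data + parity + stop); frames
// must be separated by 3.5 character times. Above 19200 baud the spec
// recommends a fixed 1750 us instead.
long interFrameDelayUs(long baud) {
    if (baud > 19200) return 1750;
    return static_cast<long>(3.5 * 11 * 1000000.0 / baud);
}
```

At 9600 baud this comes to about 4 ms of mandatory silence per frame, which is awkward to guarantee from user-space code without the driver-level handling that libmodbus provides.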

Moving on from software issues but still within the motor subsystem's scope, further problems arose when the motor system was mounted on the mechanical table. The positioning was accurate only up to a point, and certain nonlinearities along the motors' track were discovered when the positioning was measured. The results are presented in full detail in Section 4.1. The positioning error was fully quantified and shown to be incapable of hindering successful component placement, given the size of PTH components and their corresponding pin holes. Testing the motor system from a software perspective, as well as examining the motors' revolutions against the given commands while they were detached from the mechanical table, pinpoints the pulses-per-revolution nonlinearities as being of specifically mechanical nature, such as in the belt and pulley system that allowed the placer to move
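For context on why the drive train, rather than the commanded pulse counts, is the suspect: in an ideal belt-and-pulley axis the mapping from commanded distance to step pulses is perfectly linear, as in the sketch below, so any nonlinearity measured on the table must enter after the pulses are generated. All parameter values here are illustrative assumptions, not the thesis hardware:

```cpp
#include <cassert>
#include <cmath>

// Illustrative parameters only -- not the thesis hardware values.
const int    FULL_STEPS_PER_REV = 200;  // typical 1.8-degree stepper
const int    MICROSTEPS         = 8;    // assumed driver microstepping
const double PULLEY_CIRC_MM     = 40.0; // belt advanced per revolution

// Pulses needed to move a belt-driven axis a given distance, assuming an
// ideal (perfectly linear) drive; measured nonlinearities come on top.
long pulsesForDistance(double mm) {
    double mmPerPulse = PULLEY_CIRC_MM / (FULL_STEPS_PER_REV * MICROSTEPS);
    return std::lround(mm / mmPerPulse);
}
```

With these assumed values one pulse corresponds to 0.025 mm of travel, so a 10 mm move is exactly 400 pulses; the deviations observed in Section 4.1 are departures from this ideal line.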
