
THE APPLICATION OF 3D RECONSTRUCTION BY STEREO VISION FOR THE PURPOSE OF ASSESSING WELD QUALITY

by Andrew M. Neill


© Copyright by Andrew M. Neill, 2016 All Rights Reserved


A thesis submitted to the Faculty and the Board of Trustees of the Colorado School of Mines in partial fulfillment of the requirements for the degree of Doctor of Philosophy (Engineering Systems).

Golden, Colorado
Date
Signed: Andrew M. Neill
Signed: Dr. John P. H. Steele, Thesis Advisor

Golden, Colorado
Date
Signed: Dr. Gregory Jackson, Professor and Head, Department of Mechanical Engineering


ABSTRACT

Welding is one of the most integral and fundamental technologies enhancing the quality and safety of our lives today. Applications range from mining and construction to automotive manufacturing and aerospace, and most are being pushed toward higher quality standards as well as toward the adoption of automated welding systems. Safety, cost, performance, and reliability are driving the need for faster, higher performance systems in the welding industry. There is an increasing demand for better quality monitoring and process control, especially in automated welding systems where the sensing ability and adaptive skill of an experienced human welder are no longer in the loop. Several technologies exist to monitor welding process waveforms and weld joint geometry and to provide post-process weld inspection, but one of the most important goals for modern welding systems is quality monitoring and process control to achieve the desired geometry and weld quality while the weld is being made, that is, online. This would make more information available to an automated welding system, enable real-time quality monitoring, and facilitate the development of adaptive closed-loop control of the welding process. Weld sensing technologies available today do not provide measurements of the three-dimensional geometry during the welding process sufficient to support these capabilities.

This thesis describes the development of a stereo vision system capable of providing near real-time, scaled, three-dimensional reconstruction of a portion of the weld pool and the surface of the deposited weld bead. A pair of cameras has been mounted on a welding robot in an eye-in-hand configuration. The cameras have been calibrated to high precision and used to collect sequences of images from the welding process. These images were then rectified for stereo matching, filtered, and passed through four stereo correspondence algorithms to evaluate the algorithms for efficacy and feasibility. The results from the stereo correspondence were then used to construct a three-dimensional model of the weld bead features to a resolution of approximately 1 millimeter. The results presented in this thesis provide scaled weld pool reconstruction with a level of speed and detail that improves on the capability of current technology and establishes a baseline for further development of automated welding systems. Analysis of errors, speed of calculation, and limitations of the process are included. Recommendations for future investigations based on the findings of this research are also provided.

TABLE OF CONTENTS

ABSTRACT
LIST OF FIGURES
LIST OF TABLES
ACKNOWLEDGMENTS
DEDICATION

CHAPTER 1 INTRODUCTION AND MOTIVATION
1.1 Background
1.1.1 Welding Fundamentals
1.1.2 Common welding terms and definitions
1.1.3 Gas Metal Arc Welding
1.1.4 3D Scene Reconstruction from Stereo Vision
1.2 Objectives
1.3 Motivation
1.3.1 Weld Process Standards and Weld Inspection
1.3.2 The Welding Environment
1.4 Limitations of Scope
1.5 Novelty of this work
1.6 Outline of Dissertation

CHAPTER 2 REVIEW OF RELATED WORK
2.1 Welding Fundamentals
2.1.1 Pertinent welding codes and inspection criteria
2.2 Modeling of the welding process and weld pool
2.3 Weld Process Monitoring
2.3.1 Weld Seam Contactless 3D Analysis by Line Scan
2.3.2 Weld Pool Imaging, Monitoring and Analysis
2.3.3 Weld Pool 3D Monitoring and Analysis by Stereo Vision
2.3.4 Weld Pool 3D Monitoring by Structured Light
2.4 Three Dimensional Scene Reconstruction and Depth Measurement Techniques
2.5 Stereo Vision Algorithms
2.5.1 Fundamentals of Stereo Vision
2.5.2 Survey of Stereo Vision Correspondence Algorithms
2.5.3 Block Matching Algorithm
2.5.4 Semiglobal Block Matching Algorithm
2.5.5 Bilateral Subtraction
2.5.6 Scene Flow by GCSF
2.6 Conclusions

CHAPTER 3 MODELING AND SIMULATION OF THREE DIMENSIONAL WELD POOL RECONSTRUCTION BY STEREO VISION
3.1 Abstract
3.2 Introduction
3.3 Related Work
3.4 Modeling
3.4.1 Camera Mounting Model
3.4.2 Simulation of Images
3.4.2.1 Model for Simulation
3.4.2.2 Projection of Model onto Stereo Image Pair
3.5 Comparison of Stereo Algorithms with Synthesized Images
3.6 Analysis of Results
3.7 Conclusions
3.8 Chapter Post Script

CHAPTER 4 EXPERIMENTAL APPROACH AND METHODOLOGY
4.1 Imaging System Development
4.1.2 Basler Cameras
4.1.2.1 Lenses
4.1.2.2 Image noise
4.1.2.3 Image Synchronization
4.2 Calibration and Image Rectification
4.2.1 Intrinsic Calibration and Camera Matrices
4.2.2 Lens Distortions
4.2.3 Extrinsic Calibration
4.2.4 Project Specific Calibration Procedure
4.2.5 Rectification and Verification/confirmation of calibration
4.3 Welding Process Equipment
4.3.1 Fanuc Robot
4.3.2 Welding System
4.3.3 Data Acquisition
4.4 Data Collection
4.4.1 GMAW Process Parameters
4.4.2 Design of Experiments
4.4.3 Imaging

CHAPTER 5 STEREO VISION METHODS FOR WELD POOL THREE DIMENSIONAL RECONSTRUCTION
5.1 Programming
5.1.1 Correspondence Characterization
5.1.2 Pre-processing
5.1.3 Perspective Rectification
5.1.3.1 Calculating the Perspective Transform
5.1.3.2 Removing the Perspective Rectification to Calculate the Actual Disparity
5.1.4 Post-processing
5.2.1 Explanation of Most Influential Parameters
5.2.2 Parameter Tuning
5.3 Conclusions

CHAPTER 6 WELD SCENE RECONSTRUCTION RESULTS
6.1 Bead On Plate Welds Correspondence Algorithm Comparisons
6.2 Final Weld Reconstruction Results
6.3 Results of Point Cloud Measurements
6.4 Processing times
6.5 Analysis and Discussion
6.5.1 Discussion of stereo algorithms
6.5.2 Identification of flaws
6.6 Conclusions

CHAPTER 7 DISCUSSION AND CONCLUSIONS
7.1 Conclusions
7.2 Suggestions for Future Work

REFERENCES CITED
APPENDIX A - CAMERA CALIBRATION PROCEDURES AND BEST PRACTICES
APPENDIX B - DATA
B.1 Completed Welds
B.2 Collected Data Values

LIST OF FIGURES

Figure 1.1 Groove and Bead on Plate Weld Geometry
Figure 1.2 Fillet Weld Geometry
Figure 1.3 GMAW Process
Figure 1.4 ServoRobot Power-Cam weld inspection and measurement system (figure from Servo-Robot Inc.)
Figure 2.1 Recreation of Ma and Zhang's model
Figure 2.2 Figure from Hu, Guo and Tsai's paper
Figure 2.3 Control by line scan, figure from Li et al.
Figure 2.4 2D Features of interest, figure from Zhang et al.
Figure 2.5 Biprism stereo techniques using a prism, left, and mirrors, right. Figures from Zhang et al. and Zhao et al., respectively
Figure 2.6 Structured light 3D reconstruction technique, figure from Song & Zhang
Figure 3.1 Ellipsoid model of weld pool and weld bead with ripples
Figure 3.2 Image of actual weld pool taken in our lab
Figure 3.3 Synthesized Image Pair
Figure 3.4 Block Matching Disparity Map
Figure 3.5 Bilateral Subtraction Disparity Map
Figure 3.6 Semiglobal Block Matching Disparity Map
Figure 3.7 Cross-Scale Aggregation with census and gradient cost computation, bilateral filter cost aggregation, and segment based post processing
Figure 3.8 World model reconstruction using Block Matching disparity map
Figure 4.1 Camera mounting system model
Figure 4.2 Camera mount
Figure 4.3 Image noise - fixed pattern noise
Figure 4.4 Image noise - random noise
Figure 4.6 Image synchronization test
Figure 4.7 Lens optics and pinhole camera model
Figure 4.8 Calibration points finding algorithm
Figure 4.9 Epipolar geometry on the Tsukuba image pair
Figure 4.10 Epipolar geometry check
Figure 4.11 Reconstruction test: washer image pair
Figure 4.12 Reconstruction test: washer disparity image
Figure 4.13 Reconstruction test: washer point cloud points. Dimensions in millimeters
Figure 5.1 Correspondence characterization program
Figure 5.2 Perspective rectification: input images
Figure 5.3 Perspective rectification: output images
Figure 5.4 Block matching tuning program
Figure 5.5 SGBM tuning program
Figure 6.1 Raw image pair for comparison
Figure 6.2 Rectified image pair for comparison
Figure 6.3 Perspective filtered image pair
Figure 6.4 Perspective filtered image pair normalized unfiltered disparity maps
Figure 6.5 Perspective filtered and thresholded image pair
Figure 6.6 Perspective filtered and thresholded image pair normalized unfiltered disparity maps
Figure 6.7 Normalized Disparity Maps after WLS Filtering
Figure 6.8 BM point cloud reconstruction
Figure 6.9 SGBM Point Cloud Reconstruction
Figure 6.10 Bead On Plate weld 4 smoothed reconstruction results for easier visualization
Figure 6.11 Fillet Weld 9 smoothed reconstruction results
Figure 6.12 Graph of Bead on Plate weld widths comparison
Figure 6.13 Graph of Bead on Plate weld height comparison
Figure B.1 Bead on Plate Weld 1
Figure B.2 Bead on Plate Weld 2
Figure B.3 Bead on Plate Weld 3
Figure B.4 Bead on Plate Weld 4
Figure B.5 Bead on Plate Weld 5
Figure B.6 Bead on Plate Weld 6
Figure B.7 Bead on Plate Weld 7
Figure B.8 Bead on Plate Weld 8
Figure B.9 Fillet Weld 1
Figure B.10 Fillet Weld 2
Figure B.11 Fillet Weld 3
Figure B.12 Fillet Weld 4
Figure B.13 Fillet Weld 5
Figure B.14 Fillet Weld 6
Figure B.15 Fillet Weld 7
Figure B.16 Fillet Weld 8
Figure B.17 Fillet Weld 9
Figure B.18 Fillet Weld 10
Figure B.19 Fillet Weld 11
Figure B.20 Fillet Weld 12

LIST OF TABLES

Table 3.1 Basic Reconstruction Dimensions
Table 4.1 Reconstruction test: dimension comparison
Table 4.2 ARC Mate® 100iB specifications
Table 4.3 Lincoln Electric Power Wave® 455 specifications
Table 4.4 Gas Metal Arc Welding parameters
Table 4.5 Design of Experiments - bead on plate welds
Table 4.6 Design of Experiments - fillet welds
Table 4.7 Imaging parameters
Table 6.1 Bead on Plate simple weld dimensions
Table 6.2 Fillet simple weld dimensions
Table 6.3 Program execution time
Table B.1 Bead on Plate weld collection data

ACKNOWLEDGMENTS

I would like to thank, first and foremost, my outstanding and ever-helpful adviser Dr. John Steele and advising committee: Dr. Bill Hoff, Dr. Douglas VanBossuyt, Dr. Kevin Moore, and member at large: Dr. Carolina Payares. They never failed to be willing, helpful, and encouraging. I couldn’t have asked for a better committee.

I am gratefully indebted to my labmates, Michael Bowman and Adewole Ayoade, for their help with data collection and programming, for moral support, and for everything else in between.

I would also like to thank Dr. Jerry Jones for providing generous insight on welding code standards and weld quality considerations, Dr. Richard Campbell for exemplary instruction on weld inspection technology, and Jesse Grantham for his support, interest, and advice.


For my parents, John and Barbara Neill.


CHAPTER 1

INTRODUCTION AND MOTIVATION

The objective of this research is to develop an innovative approach to sensing the welding process. The aim is to expand and improve the current techniques for visual analysis of the welding process, especially the weld pool. Automated welding is widely used in manufacturing and its use is steadily increasing; however, monitoring and control of the welding process is still largely dependent on experienced and knowledgeable welders who must understand, interpret, and "dial in the parameters." Ideally, a welding system should be completely autonomous. To accomplish this it must receive inputs or specifications for the process, initialize the process using prior knowledge, sense the resulting state, judge the status as the weld progresses, and make changes to update the process when necessary, thereby becoming an analog to the skills of human welders. The first step in any autonomous control is proper sensing. For welding, visual three-dimensional (3D) sensing, reconstruction, and analysis provide the rich and direct information required to assess the status of the weld. This study explores a system that can apply stereo matching techniques to welding images to quickly and reliably generate 3D reconstructions of the weld pool. The research also looks at techniques to find the weld pool boundary, determine the geometric shape of the weld bead, and analyze weld quality. The overall goal of this research is to develop more informative sensing techniques and understanding of the welding process to advance the adaptive control of the welding process. To our knowledge this will be the first exploration into the use of stereo 3D reconstruction for weld diagnostics.

This research project will expand and improve the current techniques for sensing and analyzing the welding process through 3D reconstruction of the weld pool using stereo vision. The objective of this research is to refine an innovative approach to sensing the welding process and to expand the understanding of the capabilities of such a system, especially in the application of real-time adaptive weld process control. To date, most weld process sensing systems are either two-dimensional or low-fidelity 3D (Wang, 2014). To be useful to academia and industry, the system will have to be accurate, precise, real-time, low-cost, robust, simple, and easy to use. The project will combine stereo matching techniques with robust and reliable imaging practices to quickly generate 3D reconstructions of the weld pool and the deposited bead. The overall goal of this research is to systematically explore more capable sensing techniques and provide better understanding of the welding process to advance the adaptive control of welding.


1.1 Background

Welding is one of the most prevalent processes in modern manufacturing where joining of metals is required. It is a complex process with many variations. Welding is held to high standards to ensure that resultant welds are capable of supporting the loads for which they are designed. There are several parameters that can be monitored during and after welding, but most commonly it is the geometry of the resultant weld that is inspected to qualify a weld. A new sensor system that could determine these dimensions during welding would be advantageous to the industry.

1.1.1 Welding Fundamentals

Welding is the process of melting metals to join them together as one contiguous component. Welding commonly uses heat supplied by an electric arc, laser, or gas torch to melt the metals, but related processes include, among others, vibration welding and friction stir welding, which use friction to join metals together, and resistance spot welding, which uses an electric current to heat a small nugget of metal between two adjoining pieces of metal. The welding process is widely used to fuse metals such as structural carbon steel, stainless steel, and aluminum. The focus of this research is the study of electric arc welding on low carbon steel, but the principles and procedures could be adapted to other types of welding.

When the process is performed correctly, welding holds many advantages over other joining techniques. A welded joint is difficult to disassemble, but it provides substantially better structural integrity than bolting or riveting at a reduced weight. Welding is therefore used to manufacture many products, ranging from small electrical components and precision chemical equipment to mining equipment and ships weighing hundreds of tons. When two pieces of metal are welded together they become one fused piece and can therefore withstand higher loads.

1.1.2 Common welding terms and definitions

Welds are categorized by the weld shape that is deposited, which is typically dependent on the configuration of the base metals to be joined. The configuration of the base metals is referred to as the joint. In research it is common to use a bead on plate weld, as shown in Figure 1.1a. A bead on plate weld is performed on the surface of a bare plate rather than in the presence of a joint, and thereby minimizes external factors that can be caused by joint geometry imperfections such as misalignment or excess opening. Bead on plate welds are rarely used in real-world applications, however; instead, welds are made in joints such as butt joints, T-joints, and lap joints. Butt joints, created by two parallel faces, are most commonly seen in pipe welding applications where two pipe sections meet. The butt joint is often prepared by cutting a bevel or similar type of geometry to improve access to the full depth of the joint.


Figure 1.1: Groove and Bead on Plate Weld Geometry. (a) Bead-on-Plate Weld; (b) Groove Weld.

A groove weld, as shown in Figure 1.1b, would then be used to weld the joint together. A fillet weld would be used to weld T-joints and lap joints. Fillet welds account for the vast majority of structural welds. A fillet weld joint occurs where two faces intersect at approximately 90°. The weld is then laid in the intersecting corner of the two faces. This will be the primary type of weld studied in this project. This research will use bead on plate welds for proof of concept, and lap joint fillet welds, as shown in Figure 1.2, for more practical welds.

Figure 1.2: Fillet Weld Geometry. (a) Lap Joint Fillet Weld; (b) Convex Fillet Weld; (c) Concave Fillet Weld.

1.1.3 Gas Metal Arc Welding

The focus of this research is Gas Metal Arc Welding (GMAW). GMAW can be further classified as a metal inert gas (MIG) or metal active gas (MAG) process. In this study an inert welding gas, an argon and carbon dioxide mixture, will be used, putting the process into the MIG welding category. The fundamental process of GMAW involves an electric arc, a consumable electrode, and shielding gas, as shown in Figure 1.3.

Figure 1.3: GMAW Process

High current, usually 100-400 amps, and moderate voltage, usually 15-30 volts, are applied to a contact tube. The contact tube conducts the electricity into the electrode, which maintains the arc over the weld pool. The electrode is constantly fed out and consumed in the welding process. The heat from the arc, in addition to the metal transfer, results in a melt pool formed from the base material as well as material that has melted from the electrode. The arc is surrounded and shielded from the ambient air by a constant flow of shielding gas from the nozzle surrounding the contact tube.

GMAW is an industry standard for arc welding. It provides high deposition rates and can be used to weld metals from thin sheet metal to joints several inches thick. It is advantageous for automated processes because it is a relatively clean welding technique and requires no post-process activity, e.g., grinding or brushing. Many processes leave a layer of protective slag on top of the weld bead that must be chipped off after welding; GMAW uses a shielding gas and produces no slag. When the weld is done properly it can therefore be sent down the line, or additional weld passes may be made immediately.

1.1.4 3D Scene Reconstruction from Stereo Vision

Images provide a great deal of information about a process. Digital images are typically composed of thousands of pixels which can be related to features of the scene being imaged. Stereo imaging is a technique of acquiring two images of the same scene from slightly different perspectives. The differences in the two images can then be used to determine depth.

Stereo vision is a complex technique for determining depth within a scene and yields high resolution information. When a welder observes the welding process, he or she is typically using the stereo images from his or her eyes to monitor the size, shape, and curvature of the weld pool. This sensing system provides the feedback information about the process necessary to control the weld. A computer is capable of acquiring a pair of images and converting those images to depth measurements, but doing so can be challenging and time consuming. Several stereo correspondence algorithms, with varying capabilities, are available to make these depth calculations.

This research project will use stereo correspondence algorithms to create a depth map reconstruction of the welding process, as viewed from a stereo camera setup. Once depth calculations are made, the profile of the weld can be estimated and even measured. This profile can be used to estimate the pertinent dimensions given in Figure 1.1 and Figure 1.2.
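
As a concrete illustration of this kind of pipeline, the sketch below computes a disparity map from a rectified image pair with OpenCV's semiglobal block matching and reprojects it to a point cloud. It is a minimal example only: the file names, matcher parameters, and the identity Q matrix are placeholders, not the calibration or settings used in this work.

```python
import cv2
import numpy as np

# Rectified left/right images of the weld scene (file names are placeholders).
left = cv2.imread("left_rectified.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right_rectified.png", cv2.IMREAD_GRAYSCALE)

# Semiglobal block matching; parameter values here are illustrative, not tuned.
sgbm = cv2.StereoSGBM_create(minDisparity=0,
                             numDisparities=64,   # must be divisible by 16
                             blockSize=5,
                             P1=8 * 5 ** 2,
                             P2=32 * 5 ** 2,
                             uniquenessRatio=10)
disparity = sgbm.compute(left, right).astype(np.float32) / 16.0  # fixed-point to pixels

# Q is the 4x4 disparity-to-depth matrix produced by stereo rectification
# (e.g. cv2.stereoRectify); a dummy identity matrix stands in for it here.
Q = np.eye(4, dtype=np.float32)
points_3d = cv2.reprojectImageTo3D(disparity, Q)  # per-pixel (X, Y, Z) in calibration units
```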

1.2 Objectives

The primary measure of a weld is its mechanical properties, which are not accurately represented by the industry standard measures of voltage, current, and other types of waveform sensing. The features of interest are the fill height, width, toe angle, and other such geometry that can be determined by looking at the weld itself, but these measurements are typically taken after the weld is completed and cooled to room temperature. The objective of this project is therefore to develop a sensing system to observe the primary result of welding, the weld geometry, through imaging during the welding process, rather than relying on secondary results of the welding process such as voltage feedback. This involves observing the weld pool, the freeze line between the weld pool and weld bead, and the resultant weld bead. The objectives of the project can be summarized as follows:

1. Use stereo imaging techniques to create 3D reconstructions of the weld pool while welding.
2. Create high-speed 3D reconstructions of the weld bead using the trailing edge of the weld pool.
3. Analyze the capability to assess weld joint quality from the results of tasks 1 and 2.

1.3 Motivation

Much of the welding done in manufacturing is performed by automated welding systems, but these systems are usually blind to the world around them, including to the welding process. These machines rely almost entirely on proper programming by the operator and very precise tolerances to produce the required weld. The only commercially available weld sensing systems are limited in their capabilities. These include technologies such as the ServoRobot seam finding and seam tracking systems and weld inspection systems such as the Power-Scan system shown in Figure 1.4 (Servo-Robot Inc., 2016), Through Arc Seam Tracking (TAST) (FANUC Robotics, 2005), and tactile sensing (Abb, 2012). These systems give the robot the ability to stay within the welding joint in spite of inconsistencies between programmed points and part geometry, and some can perform post-weld inspection, but none can provide the precise real-time control of an experienced human welder.

Figure 1.4: ServoRobot Power-Cam weld inspection and measurement system (figure from Servo-Robot Inc. (2016))

Welding is a complicated process, and it takes years for a person to develop the skill necessary to produce quality welds. Even with the most experienced welders, though, consistency is never guaranteed. The welding codes dictate an allowable level of defects in the weld because of this (ASME, 2010; AWS, 2015). These inconsistencies can be caused by any number of factors including, but not limited to, poor fit up, imprecise or inadequate joint preparation, inconsistency in the welding consumables, and even welder fatigue. The operator controls the weld process and must constantly monitor and adjust the parameters to maintain a quality weld bead. Inevitably, human error brought on by improper control, and often exacerbated by inconsistencies in the weld setup, causes minor or even major defects in the welding process. The goal of every welder is to minimize these defects to stay within the allowable range set by the welding code.

The welding industry is faced with a shortage of experienced welders in spite of steady demand. The Bureau of Labor Statistics reports that in 2014, 397,900 people were employed as welders, cutters, solderers, and brazers (U.S. Bureau of Labor and Statistics, 2015). A recent study published by the National Center for Welding Education and Training illustrates an industry-wide deficit of approximately 500 welders per year (Derwart et al., 2008). Many manufacturers are also trying to bring their manufacturing back to the United States but are faced with higher labor costs.

Of all the standard costs of the welding process (labor, consumables, machine cost, and energy cost), the labor costs are by far the highest for manual welding, usually around 80-90% (Weman, 2003). These costs can be greatly reduced by mechanization, but at the expense of adaptive control.


1.3.1 Weld Process Standards and Weld Inspection

Arc welding requires a great amount of skilled control and precision to be performed well. The image most people have of a standard, high-quality weld bead is a laid-over stack of dimes. This simple geometry is made by moving the torch with millimeter precision, not only in its vertical and horizontal location within the weld seam but also in its height, and it must be controlled at a constant speed. A change in the welding speed is accompanied by a change in material deposition and heat input. Weld specifications designate a size for the completed weld. For example, a fillet weld may simply be specified as a 1/4 inch fillet weld, meaning the leg length of the weld must be 1/4 inch or larger, as shown in Figure 1.2. It is then up to the welder to decide the optimal parameters to meet this requirement and control the weld to create this geometry. During the process, the welder must be able to assess the current state of the weld, primarily based on the appearance of the weld pool, and make controlled motion adjustments based on his or her knowledge of the process, all while maintaining the welding arc and the necessary angles, locations, and speed. For example, while making a fillet weld, the welder must move smoothly but quickly along the length of the weld, while keeping within about 5° of a 45° work angle and a 10° push angle, and within 5 mm of a 20 mm contact tube to work distance. At the same time the welder must adjust travel speed and weaving motions as necessary to continuously and smoothly build up the deposited metal to the specifications of the weld. The welder then has to adapt to different weld seam geometries and often to different parts as well. Learning to properly perform all of these micro and macro adjustments takes years of training to master.

The other technique is to use a mechanized process in which the locations are precisely controlled by a robot or other device. The welding parameters are then tightly maintained through the length of the weld but usually are not updated without additional expensive equipment. Thus, an automated system relies on "dead reckoning," which must be performed in very strict environments and programmed by knowledgeable operators to consistently produce quality welds.

In many cases the weld must be made with very tight tolerances. Higher standards are being required to keep up with reduced design tolerances, industry competition, aesthetic demands, and government regulations. Welding shops will often spend a great amount of time developing the process to ensure the weld can meet the design requirements. For instance, if an engineer specifies a 1/2 inch groove weld on a high strength steel, the welding shop will often have to create several experimental welds to make sure that complete joint penetration is achieved, the weld reinforcement is within the allowable range, and no discontinuities are observed.

Then, the weld will be cut into strips, machined, and bend tested, tensile tested, or both. Highly critical welds may even be X-rayed. These tests systematically ascertain the ability of the weld to hold the required loads. Finally, a report of the procedure is made, often referred to as a Procedure Qualification Record (PQR). A Welding Procedure Specification (WPS) is designed based on the PQR, and this WPS is given to the production floor to specify the required welding parameters to perform the weld.

An inspector is sometimes required to qualify crucial welds before, during, and after welding. The inspector must check preparation for welding including all pertinent dimensions of the joint preparation, process parameters, environment, etc. The inspector can then be required to monitor the welding process for discrepancies from the WPS. After welding, the inspector will check the size, length, and location of all welds, locate any flaws, and report findings quickly and concisely. The inspector will often be required to check each pass of a multi-pass weld to make sure there are no inclusions or other flaws that will be covered by subsequent weld passes. This increases the required time of welding but also leaves room for error if the inspector misses something.

1.3.2 The Welding Environment

Welding is a dangerous, dirty, and oftentimes dull task. Assembly floors often weld hundreds of identical parts in the same location every day. Arc welding is a slow process and requires concentration, but when done repetitively it can quickly become a monotonous and tedious task. Welding also produces slag and fumes, making it a potentially unhealthy process.

Welding is also a dangerous process. Welding is done with incredible amounts of heat, and the arc emits intense ultraviolet light. Exposure to this light, often referred to as "arc flash," can cause skin burn similar to an intense sunburn, "arc eye," which manifests itself similarly to conjunctivitis, and, in extreme cases usually caused by extended exposure, skin cancer. Welding fumes include many metals and noxious gases, including carbon monoxide, ozone, phosgene, and, one of the most notable, hexavalent chromium. Prolonged exposure is known to cause various types of cancer, especially lung cancer.

All of these factors are leading to an increase in the number of automated welding work cells. Robotic systems are tireless, capable of extremely precise consistency, and impervious to the harsh welding environment. However, these automated welding systems do not have the ability to adequately sense and adapt to deficiencies in the weld process such as undercutting, overfill, and burn through. Automated systems can very tightly maintain the weld parameters but lack feedback sensing beyond indirect weld parameters such as position, velocity, voltage, and current. Weld seam locating can be done using voltage and current signals and/or laser line tracking cameras, but requires very expensive tracking systems. Even with the most advanced automated welding systems, the parameters must be assessed and often adjusted by an experienced welding engineer to produce the required weld.


The ultimate goal of robotic welding is to present a robot with a part to be welded and some designation as to where the welds should be. Then the robot would assess the welds for location, weld size, and fit up characteristics and begin to weld based on nominal weld values. Based on the weld performance, the robot would then adjust the process parameters as necessary to produce a weld as close to the weld specifications as possible. In this way, the robot would be as capable a welder as a human and human exposure to this dull, dangerous, and dirty line of work could be minimized.

Currently, sensing is one of the largest limitations of robotic welding systems. Robots used for welding are capable of very precise and highly repeatable motion. The operator programs in a series of points, and the robot is usually capable of precise repeatability, on the order of 0.5 mm, in reaching these points. The robot can then update these points with existing sensing systems, such as touch sensing and line scans, to adjust the path to suit the location of the weld seam. Additionally, the weld can be adjusted while welding. If a gap changes continuously, sensors exist that allow the robot to track the edge of the gap through the welding process and thereby know where to weave back and forth while welding. If the arc deviates from preset acceptable criteria, the welding power supply or welding control system will shut down the welding power and cause a fault status in the robotic system. This will generally pause the welding process and alert an operator that attention to the cell is required.

A qualified inspector must also be present to ensure that the welds meet the specifications of the engineer, just as described in Section 1.3.1. Ideally, the robot would be able to qualify its own welds in process. In order to accomplish this, the robot would have to be able to measure the weld profile. If an issue were found before the weld was completed, the robot could shut down the process and minimize rework.

1.4 Limitations of Scope

The work done toward this research will be focused, but also limited. All welding will be done using a constant voltage GMAW process rather than a pulsed GMAW or GTAW process. These other processes should be explored in future work, as they are active areas of interest in the research community. The welding will be performed in a laboratory environment with minimal outside perturbations, providing ideal conditions for proof of concept data.

The analysis of the welds will be limited to fundamental geometric and basic qualitative analysis. Metallurgical analysis will be limited to observation of features such as cracking and porosity rather than extensive studies of micrographs for detailed metallurgical features such as grain microstructure and formation, which could be the basis of further research.

Modeling will be accomplished using established and available models for the purpose of validation of results. Because there are no readily available techniques for measuring the details of the weld pool and providing independent measurement beyond the basic curvature techniques used by Liu & Zhang (2015), the weld pool reconstruction will be validated by model estimation and comparison with dimensions of the cold weld bead.

1.5 Novelty of this work

The original contributions of this work include:

• Exploration and assessment of several stereo matching algorithms for application to online GMAW 3D weld pool monitoring

• Development of scaled reconstruction of 3D GMAW weld pools using stereo vision

• Demonstration of quality assessment based on weld size and ability to discern qualitative information of GMAW weld pools using stereo vision and 3D reconstruction

• Results presented show the ability to reconstruct the surface of a weld pool to within 1 mm, or approximately 3/64 in., providing a baseline for weld reconstruction results.

1.6 Outline of Dissertation

This thesis is organized into 7 chapters. Chapter 1 has introduced the background and motivation for the research and presented the objectives and some of the capabilities that will be achieved. Chapter 2 will discuss the related knowledge and research efforts that pertain to this work. The modeling effort and fundamental process of building a reconstructed 3D representation from a pair of images will be discussed in Chapter 3, but the process will use synthetic images. The techniques for acquiring on-line stereo images will be presented in Chapter 4. This will also include the calibration process to provide the required rectification information for processing the images. Chapter 5 will detail the stereo vision programming as applied to the real images to perform stereo correspondence for this challenging environment. Chapter 6 will then present the results of the work and discuss the capabilities of the technique. Finally Chapter 7 discusses the final results and conclusions derived from the work presented and also includes many suggestions for future work.


CHAPTER 2

REVIEW OF RELATED WORK

Weld monitoring and stereo vision techniques are two very active areas of research. This chapter will review some of the fundamental aspects of welding, active research for weld pool modeling, and several of the state of the art weld process monitoring techniques. These monitoring techniques include passive imaging techniques and active structured light techniques that are used for monitoring the weld geometry. The chapter will also review some of the fundamentals of depth measurement technology, stereo vision techniques and the algorithms that will be used in this research.

2.1 Welding Fundamentals

The process of welding is a combination of heating of metal, fluid flow within the molten pool, and cooling and solidification that creates the final bond to join the pieces together. The fluid flow and solidification that occur following heating are among the most important aspects of welding, and proper monitoring and control of these processes can lead to better control of the weld shape, microstructure, properties, and defects (Kou, 2012). The fluid flow in welding, frequently called Marangoni flow, is driven by the differences in surface tension throughout or across the weld pool. The arc heating causes large variations in temperature in the pool, which in turn induce surface tension gradients. These surface tension gradients, combined with chemical factors in the weld pool such as sulfur content, cause different flow characteristics in the weld pool. These flow characteristics can change the width and penetration by creating inward or outward surface flow (Kou, 2012).

Weld pool surface oscillation also accompanies Marangoni flow and is an important aspect of welding because it is often easier to measure and has been correlated to penetration (Xiao & Ouden, 1993). In their review of oscillations of the GTAW weld pool, Xiao and Ouden found that higher frequencies, in the 140 Hz range, could be correlated to lower penetration and higher node count vibration modes; that is, the weld exhibits a higher frequency across its surface, which causes the waves to be more closely spaced. When the oscillation frequency decreased, to the 60 Hz range, the penetration increased and the vibration mode node count was reduced. Therefore, measuring the oscillation of the weld pool could be a very strong indicator of weld penetration depth. This behavior correlates with that of water droplets.

Once penetration into the weld joint has been achieved, the weld begins to solidify as a result of cooling at the back side of the pool as the arc moves. The solidification begins with microstructure grain formation in the fusion zone, the interface between the molten pool and the solidified metal beyond it (Kou, 2012). The solidification is largely dependent on the temperature differentials found here. This is the time period when flaws are formed and the size of the weld is determined. If the weld is too hot and spreads too widely, undercutting will occur. Volatility in the weld pool, often caused by oxidation from lack of shielding gas, also causes porosity in the weld.

In summary, the current flowing through the welding arc heats the base metal and, in the case of GMAW, the electrode and droplets. A pool begins to form, which is excited into flow by the temperature and surface tension gradients. The flow causes deeper penetration into the parent metal. The events on the surface of the pool, such as droplet impingement, arc pulses, and flow characteristics, cause a measurable oscillation, resulting in ripples on the weld surface. Finally, as the arc moves, the trailing edge of the weld pool begins to cool. Grains grow from the already solid metal and create the solidified weld. This solidified weld is the final product. Therefore, monitoring and control of the weld process is most accurately accomplished by monitoring this interface region where the weld pool events occur and the solidification behavior can be observed.

2.1.1 Pertinent welding codes and inspection criteria

The American Welding Society (AWS), the American Society of Mechanical Engineers (ASME), and the American Petroleum Institute (API) are three of the more notable standards organizations for the welding industry. They have produced the D1.1 Structural Welding Code (AWS, 2015); the Boiler and Pressure Vessel Code, Section IX: Qualification Standard for Welding and Brazing Procedures, Welders, Brazers, and Welding and Brazing Operations (ASME, 2010); and the Standard for Welding Pipelines and Related Facilities (API, 1999). There are also military standards, nuclear standards, and several others, but these three are the most common. These codes provide rules and regulations for weld sizes and characteristics, weld and welder qualification, and inspection procedures for final acceptance.

These codes dictate weld size based on weld joint geometry and base metal thickness and provide standards for qualifying a weld process as designed by a welding engineer. The final weld specifications give a nominal measurement, typically with some tolerance. These measurements are typically in graduations of 1/16 in. or 1 mm, and welds must meet or exceed the specified size. Inspection measurement tools are therefore also typically graduated in 1/16 in. or 1 mm increments and often use a go/no-go type gauge. These gauges should touch the features of concern when laid over the weld if the weld meets the specified size.

The codes also specify allowable flaws in welding. Some flaws are allowable, such as those given in AWS D1.1, Table 6.1: Visual Inspection Acceptance Criteria. A typical weld can be undersized by up to 1/16 in. or 2 mm for a weld of 3/16 in. or 5 mm or less, as long as the undersized portion does not exceed 10% of the total weld length. Flush welds, per D1.1 Section 5.23.3.1, are measured to 1/32 in. or 1 mm of reinforcement for statically loaded members. A certain amount of undercutting as well as porosity is allowed, both of which are measured to 1/32 in. or 1 mm for materials less than 1 in. or 25 mm thick. The interested reader is advised to review the additional details in these specifications (API, 1999; ASME, 2010; AWS, 2015).

These specifications are important because an on-line geometric measure of weld geometry should be accurate to within the same graduations as these measurement tools and acceptance criteria tolerances in order to accurately qualify these welds. Therefore the goal of this research will be to provide a minimum of 1/16 in. resolution to measure the weld size and ideally 1/32 in. to measure flaws and flush weld reinforcement.

2.2 Modeling of the welding process and weld pool

Modeling is currently the best way to understand the weld pool and the influence of parameter changes. Modeling of the weld pool can be accomplished using several different methods including finite element analysis, analytical models, and commercial software packages that use various “black box” techniques. Many methods exist for modeling the chemical compositions, fluid flow, and especially the post-process weld geometry but the main focus of modeling for this research is on the three-dimensional geometrical models of the molten weld pool, especially for GMAW processes.

In Ma and Zhang's research involving projection of structured light to determine weld pool curvature, they use a simple model to predict their angles and determine the parameters required for visualization (Ma & Zhang, 2011). The model is a basic ellipsoid and cylinder combination that represents the fundamental shape of the weld pool and satisfies some of the more fundamental requirements of weld modeling. The model is generated from Equation 2.1. Using a sample set of variables, the model shown in Figure 2.1 was generated.


$$
F_1(X, Y, Z) =
\begin{cases}
Y^2 + (Z - Z_0)^2 - R^2 = 0, & X < -l,\ Z > 0 \\
\dfrac{(X+l)^2}{a^2} + \dfrac{Y^2}{b^2} + \dfrac{(Z-Z_0)^2}{c^2} - 1 = 0, & X > -l,\ Z > 0 \\
Z = 0, & \text{for other points}
\end{cases}
\qquad (2.1)
$$
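
One way to evaluate this ellipsoid-plus-cylinder model numerically is to solve each branch of Equation 2.1 for the upper surface, Z as a function of X and Y. The sketch below does this with NumPy; the parameter values are arbitrary placeholders, not the values used by Ma and Zhang, and only the positive-Z branch is kept.

```python
import numpy as np

def weld_pool_height(X, Y, a=4.0, b=3.0, c=1.5, R=1.2, l=2.0, Z0=-0.5):
    """Evaluate the ellipsoid-plus-cylinder weld pool model of Eq. 2.1 as an
    explicit height map Z(X, Y).  Parameter values are illustrative only."""
    Z = np.zeros_like(X, dtype=float)

    # Trailing cylinder (weld bead), X < -l:  Y^2 + (Z - Z0)^2 = R^2
    tail = X < -l
    Z[tail] = Z0 + np.sqrt(np.clip(R**2 - Y[tail]**2, 0.0, None))

    # Leading ellipsoid (weld pool), X > -l:
    # (X + l)^2 / a^2 + Y^2 / b^2 + (Z - Z0)^2 / c^2 = 1
    pool = ~tail
    r2 = 1.0 - (X[pool] + l)**2 / a**2 - Y[pool]**2 / b**2
    Z[pool] = Z0 + c * np.sqrt(np.clip(r2, 0.0, None))

    return np.clip(Z, 0.0, None)  # "Z = 0 for other points"

# Example: sample the surface on a small grid (units are arbitrary here).
X, Y = np.meshgrid(np.linspace(-8, 4, 121), np.linspace(-4, 4, 81))
Z = weld_pool_height(X, Y)
```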

Other models exist that are significantly more intricate. One of the most complete was made by Hu, Guo, and Tsai (Hu et al., 2008) using a combination of several other model systems. It uses the Volume of Fluid algorithm to model the geometry and fluid characteristics and incorporates mass, momentum, thermal energy, and even some chemical compositions. Their model is notable because it is in 3D and includes droplets, ripples, and even electrical constituents. One of their figures is shown in Figure 2.2.

Figure 2.2: Figure from Hu, Guo and Tsai’s paper(Hu et al., 2008).

Their model is an analytical model solved numerically with a large set of input parameters and very fine time steps, e.g., 10 ms. The model is computationally expensive but very influential in providing expected results for both understanding and validation. Some of the more notable observations from their work that apply to this research are the crater creation, droplet impingement, and formation of ripples. Their models demonstrate how a droplet impinges on the weld pool, causing a crater to form beneath the welding arc. Then, due to the pool deformation, the fluid is excited into a ripple moving outward from the arc's center. The hydrostatic force of the molten metal then pulls the fluid back into the crater, causing the fluid level to decrease. These ripples are approximately 0.45 mm apart and occur every 0.063 s, or about 15.9 Hz. So, to observe these ripples, a 0.45 mm resolution with a frequency of 32 frames/second, accounting for the Nyquist frequency, would be needed. Note this is a lower frequency than the GTAW weld pool oscillation frequencies previously discussed.
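
The frame-rate requirement follows directly from the Nyquist sampling criterion applied to the reported ripple period:

$$
f_{\text{ripple}} \approx \frac{1}{0.063\ \text{s}} \approx 15.9\ \text{Hz}, \qquad
f_{\text{camera}} \ge 2\,f_{\text{ripple}} \approx 31.7\ \text{Hz} \approx 32\ \text{frames/s}.
$$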

A similar modeling effort, from the Battelle Memorial Institute in Columbus, OH in 2004, was done to show three dimensional weld pool activity, including fillet welds (Cao et al., 2004). The results of this research show the fluid flow, effects of droplet velocities, and heat distributions, among other factors, during bead on plate welds and horizontal and flat fillet welds using GMAW. It is also one of the few modeling efforts that include fillet welds, one of the areas of interest for this research. One of the key points from this paper is the effect of gravity on the weld pool. The surface tension of the molten metal is often not enough to overcome the effect of gravity, and the weld metal tends to deposit more on the horizontal part of the fillet rather than the vertical. This tends to cause more heating and penetration into the horizontal plate as well. This result correlates well with the results from real welding.

2.3 Weld Process Monitoring

Welding process monitoring has been an active area of research with very little convergence toward a widely accepted technique. There are several variations of in-process weld monitoring including, but not limited to, weld waveform monitoring, acoustic sensing, 2D geometry monitoring, and 3D geometry monitoring, but the most relevant to this thesis are the techniques involving geometric 2D and 3D monitoring.

Xuewu Wang published a survey paper on the current status of three-dimensional vision-based techniques for welding (Wang, 2014). The paper looks at sensing techniques for penetration control and discusses several techniques falling into the categories of conventional sensing techniques, two-dimensional sensing technologies, and three-dimensional vision sensing technologies. The conventional sensing techniques include acoustic, infrared, oscillation frequency, and arc light methods. The author notes that these only provide one characteristic of the welding process and "sometimes they are not sufficient for penetration prediction especially when influenced by welding environment." The use of two-dimensional weld pool images led to more characteristic information and successful penetration control. Three-dimensional vision sensing led to even more information, including better defect detection, penetration control, and insight into the physical processes of the pool. He finished the paper noting that a successful sensing technology must include effective processing algorithms, robustness, and good modeling of the relationship between weld quality and the measurements received from the sensor, and that costs should be considered. Another similar, but earlier, survey paper providing a review of visual sensing of the welding process was published by Saeed in the International Journal of Modeling, Identification and Control (Saeed, 2006). In this paper he does not draw conclusions, but provides a thorough review of most of the more predominant techniques in vision-based weld sensing.

2.3.1 Weld Seam Contactless 3D Analysis by Line Scan

The most prevalent non-contact weld geometry measurements in industry are currently performed by line scan techniques following the torch. Either a laser scanner or structured light stripe follows the welding robot torch and monitors the profile of the weld joint. It is simple, accurate, and fast, making it the method of choice for most industrial monitoring systems. However, it is difficult to use for control because of the significant lag time between the solidification of the weld pool and measurement of the resultant geometry. This technique is most commonly used for process monitoring.

Seyffarth and Gaede published a paper outlining many of the basic concepts for structured light monitoring while welding (Seyffarth & Gaede, 2011). They use a very effective technique to create a 3D model by scanning over the weld seam with a laser and capturing the profile with a camera. This essentially produces a series of 2D images that are analyzed and combined into a 3D scan. The technique is very precise and is accurate to 1 mm horizontally (x) and 2 mm vertically (z), but requires a considerable amount of post-processing. In spite of this, they claim that they can keep up with production rates fairly well. However, they required the use of a very expensive laser scanning system. The concept of this process is presented in Figure 2.3.

A similar technique with further capabilities was developed in a joint research effort at the Beijing Institute of Technology, City University of Hong Kong, and the Institute of Automation in China (Li et al., 2010). This research created a similar structured light scanner, as shown in Figure 2.3.

Figure 2.3: Control by line scan, figure from (Li et al., 2010)

Their scanner achieved resolution below half a millimeter in both the vertical and horizontal measurement directions. They were able to perform feature extraction on the scanned line to observe the width of the weld groove, the width of the weld bead, the filling depth, and the reinforcement height, as well as the existence of defects such as plate displacement, weld bead misalignment, and undercutting.

Weld process monitoring by line scan is effective for analyzing the bead geometry after the weld has progressed far enough past the point of inspection that the arc light does not interfere with the laser. This incurs dead time between welding events and sensing, and in-process events are missed. The research presented in this work will improve upon the line scan technique by providing in-process monitoring that does not have significant dead time between the weld process events and sensing.


2.3.2 Weld Pool Imaging, Monitoring and Analysis

Imaging of the weld pool is not a new concept, and many commercially available cameras are capable of capturing high-speed images of the welding process, as shown in the work being done by Mann and his research group (Mann et al., 2012). They have developed a system to visualize the weld and surrounding area with minimal arc light overexposure. They are able to process the images in real time and produce images of the weld process that show the arc, pool, and surrounding weld metal with minimal pixel saturation by combining several exposures using High Dynamic Range (HDR) algorithms. Because it is difficult to reliably acquire images of the welding process without oversaturating areas of interest, this is a good technique for further development.
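
To illustrate the general idea of combining exposures, the sketch below fuses several frames of different brightness with OpenCV's Mertens exposure fusion. This is one simple way to suppress arc saturation and is not necessarily the HDR algorithm used by Mann et al.; the file names are placeholders.

```python
import cv2
import numpy as np

# Frames of the same weld scene captured at different exposures (placeholder names).
exposures = [cv2.imread(name) for name in ("weld_dark.png", "weld_mid.png", "weld_bright.png")]

# Mertens exposure fusion blends the frames without needing camera response
# curves or exposure times.
merge = cv2.createMergeMertens()
fused = merge.process(exposures)                      # float32 result, roughly in [0, 1]
fused_8bit = np.clip(fused * 255, 0, 255).astype(np.uint8)
cv2.imwrite("weld_fused.png", fused_8bit)
```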

The analysis of the weld pool can be accomplished in both 2D and 3D. 3D clearly holds a greater amount of information, but several publications show how 2D images of the weld pool can be used to effectively determine the behavior of the welding process. Some of the most influential research on the subject of 2D weld pool analysis was done by Zhang, Kovacevic, and Li. They published a paper on analysis of the GTAW weld pool, but they were more concerned with the freeze boundary geometry and its relation to penetration of the weld than with the surface features of the pool (Zhang et al., 1996). In this paper they inferred the three dimensional depth of the weld pool rather than its three dimensional height, which is more important in welding. The penetration of the weld is what bonds the metals together and is one of the most important aspects of any weld. Their algorithm created a very effective way to determine weld quality based on two dimensional images of the weld pool. They analyze the dimensions and features of interest, as shown in Figure 2.4, and determine quality based on this interpretation. Their final goal was to use this analysis to provide a control system using a neural network.

The Harbin Institute of Technology and Shandong University in China have been working together and are a very active source of publications on 2D analysis of the weld pool for control purposes (Gao & Wu, 2007; Gao et al., 2011; Wu et al., 2004, 2007). They analyze the image and find the boundary of the weld pool. This weld boundary, or values derived from it, is then fed into varying forms of fuzzy logic and neural network controls, and even Support Vector Machines (SVMs), to provide control of the welding process.
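
A minimal sketch of 2D boundary extraction is shown below, using simple Otsu thresholding and contour detection in OpenCV. It is only an illustration of the general idea; the cited groups use their own, more involved segmentation methods, and the file name is a placeholder.

```python
import cv2

# Grayscale weld pool image (placeholder file name).
img = cv2.imread("weld_pool.png", cv2.IMREAD_GRAYSCALE)
blur = cv2.GaussianBlur(img, (5, 5), 0)

# Otsu thresholding followed by contour extraction (OpenCV >= 4 return signature).
_, mask = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

boundary = max(contours, key=cv2.contourArea)               # largest blob as the pool
width_px = boundary[:, 0, 0].max() - boundary[:, 0, 0].min()  # pool width in pixels
```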

One research group has approached the measurement problem by using two separate cameras to accurately measure the height and width of the weld pool to within 6% maximum error (Xiong & Zhang, 2013).

2.3.3 Weld Pool 3D Monitoring and Analysis by Stereo Vision

Chris Mnich is one of the pioneers in the application of stereo vision for the weld pool (Mnich, 2004). He developed a stereo vision system to acquire images of a pulsed GMAW (P-GMAW) process. He filtered the images such that the arc was minimized or extinguished for the images that were processed, and thereby bypassed the issues involved with acquiring images while the arc is active. From these images he then created a series of disparity maps which were then used to create a time-varying three dimensional model.

Another approach to stereo vision is to use a biprism stereo camera lens, as seen in Figure 2.5 (Zhang et al., 2012b). This lens uses prisms to divide the view of one camera into two and slightly alters the perspective of each view.

Figure 2.5: Biprism stereo techniques using a prism, left, and mirrors, right. Figures from Zhang et al. (2012a); Zhao et al. (2008) respectively

perspective of each view. Typically it splits the image plane down the middle and shifts each perspective out slightly. This is particularly effective for viewing the weld pool because the biprism lens can be built such that it takes up less space than two separate cameras. It also eliminates the image acquisition synchronization issues. Zhang et al. (2012b) were able to create a reconstruction of the weld pool during P-GMAW using this imaging technique and the sum of squared differences block matching algorithm. Their paper includes a single smoothed surface reconstruction. They note that this is effective for analyzing the weld pool phenomena

(33)

and weld quality and is a great start toward the ability to perform closed loop control.

Another research group, in the Netherlands, used the biprism stereo approach to monitor laser welding with a high-speed camera (Zhao et al., 2008). They were able to track the speed of the laser welding keyhole more effectively with this technique. In further work they enhanced their calibration and error analysis and used the system to track a single surface oxide particle as it moved across the surface of the pool during laser welding (Zhao et al., 2008, 2009).

2.3.4 Weld Pool 3D Monitoring by Structured Light

Currently the most active researcher in analysis of the three-dimensional weld pool is Dr. YuMing Zhang at the University of Kentucky. His techniques are very effective, fast, and adaptable. His technique involves a laser projection of lines (Kovacevic & Zhang, 1997) or dots (Song & Zhang, 2007) onto the weld pool, as demonstrated in Figure 2.6. The projection reflects off the specular surface of the weld pool and onto a thin sheet of paper. A camera then observes this paper from the back side, and the reflection can be measured and used to calculate approximate values for the length, width, and convexity of the weld pool. This technique of projection is commonly referred to as structured light triangulation.

Figure 2.6: Structured light 3D reconstruction technique, figure from Song & Zhang (2008)

The structured light matrix used in their most recent paper is a 19x19 grid of dots, giving 361 points. This is sufficient to analyze the shape and curvature but not to resolve the detailed motions of the weld pool. However, they have shown that this is adequate to support a 3 Hz control system (Liu & Zhang, 2015). This is some of the most advanced and respected work in online three-dimensional analysis of the weld pool to date and has proven adequate for control techniques.

2.4 Three Dimensional Scene Reconstruction and Depth Measurement Techniques

There are many sensing techniques for generating a non-contact, 3D representation of a scene. These sensors are used to discern shapes, orientation, and changes in position, but are limited in resolution, accuracy, and precision. The more capable the sensors are in these aspects, the more useful they become.


The two most popular techniques for depth measurement in the range of millimeters to hundreds of meters are time of flight and triangulation. Time of flight techniques actively measure distance by sending a signal and measuring the time it takes to return from the environment. Triangulation techniques use differences between two or more perspectives to recreate a scene. The difference in perspective can be obtained either by passive observation from each viewpoint or by an active projection paired with passive observation.

Active techniques such as time of flight and projected light offer greater control over the measurement and usually require simpler computational algorithms, but they often require more advanced equipment to emit energy onto the area of interest and to detect and measure the response. Passive triangulation does not require an energy source, but it does suffer from the “correspondence problem” of matching corresponding physical locations between the perspectives. Passive systems tend to acquire data more quickly, but usually require more processing power to overcome the correspondence problem (Webster, 1999).

The Time Of Flight (TOF) principle, as used by LIDAR and TOF cameras, is to measure the time required for emitted energy to make the trip from the energy source to an object that reflects it, and back. Some systems measure the time delay directly, and other techniques use the phase difference between the emitted and received light.

Stereo vision has many advantages over structured light and other existing techniques for the application of weld bead investigation, mainly because it is entirely passive. In this research the scene under investigation is filled with high intensity arc light, making active techniques challenging. Time of flight technologies usually rely on infrared wavelengths and therefore are easily disrupted by the infrared radiation produced by the welding arc and hot metal. Structured light sources must be radiant enough to overcome the intense arc light, or must emit energy that does not overlap the arc’s energy spectrum, in order to provide features that can be extracted for measurement. This usually requires costly laser projection systems. Stereo vision, in contrast, uses the available energy emissions, in this case the reflections of arc light and the radiant heat emissions of the weld pool that fall within the observable range of the camera’s sensor (e.g. a CMOS array). Features with varying color and intensity, such as silicates and impurities, can be observed. These features can then be segmented and tracked. Previous work has also shown that the pixel intensity originating from the molten and solidifying metal itself, rather than from reflections of arc light or any other energy source, is a direct function of the heat emissivity of the metal and can be correlated to the temperature of the metal (Gaztanaga et al., 2012). Therefore the stereo vision system can be used to create a three-dimensional map over time with heat dissipation characteristics. This captures only the surface heat, but from the surface heat characteristics the cooling rate of the weld metal can be estimated. Based on these factors, stereo vision is the 3D measurement technique of choice for this research.


2.5 Stereo Vision Algorithms

Stereo vision, also referred to as stereoscopic vision, is the process of using two or more images to triangulate features in a scene. It works by analyzing the differences between the two images and using those differences to gauge depth. When an object is at an effectively infinite distance from the cameras, the apparent difference between the two images is negligible. But as the object comes closer to the cameras, the location of the feature begins to shift in the image plane. When two cameras observe this feature, the shift occurs in opposite directions in the two image planes. So by finding a feature and measuring this relative shift, the distance to the feature can be estimated.

The difficulty with the stereo vision approach is in the identification of these features. When humans observe a feature the brain uses a variety of techniques to uniquely identify objects and features and then build relationships and estimate depth. A computer algorithm must identify features, but it is very difficult for a computer to quickly and reliably identify an object or feature. The standard technique is therefore to take a small section of one image and search for a similar section in the other image. This search for a corresponding region is referred to as “correspondence.”

Correspondence is a difficult problem because of inconsistencies between the images. These inconsistencies can be caused by differences in perspective, lighting, textures, and occlusions, among other problems. These differences cause the features in the images to be slightly different, making it very difficult to find a perfect match between the images. Instead, the algorithm must find the best estimate of a match.

2.5.1 Fundamentals of Stereo Vision

The simplest way to find correspondences is to take a small section of pixels from one image and compare it to pixel sections in the other image. The pixel set with the smallest difference is then assumed to be the corresponding pixel set. Additional weighting functions can be added to make it more costly for pixel sets to be far from well-established matched features. There are many algorithms that can be used, but the two that are most readily available in OpenCV are the Block Matching and Semiglobal Block Matching algorithms (Itseez, 2014). OpenCV is a programming library that contains pre-built functions for image manipulation and processing. It is one of the most prevalent image processing development libraries and will be used extensively in this project.
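As a brief illustration of how these matchers are typically invoked, the following is a minimal sketch using the OpenCV 3.x-style Python interface; the image file names and parameter values are placeholders rather than the configuration used elsewhere in this thesis, and the Semiglobal variant is created analogously with cv2.StereoSGBM_create.

```python
import cv2

# Load a rectified left/right image pair as grayscale (file names are placeholders).
left = cv2.imread("left_rectified.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right_rectified.png", cv2.IMREAD_GRAYSCALE)

# Block Matching: numDisparities must be a multiple of 16; blockSize is the odd
# side length of the square comparison window.
matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)

# compute() returns fixed-point disparities scaled by 16, so divide to get pixels.
disparity = matcher.compute(left, right).astype(float) / 16.0
```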

The difference in horizontal location of a feature between the two images is referred to as “disparity” (Hoff, 2012). The disparity is calculated from Equation 2.2 (Hoff, 2012), where $x_{left}$ and $x_{right}$ are the feature locations, typically given in pixel values, in the left and right images, respectively.

d = x_{left} - x_{right} \qquad (2.2)


The disparity, d, can then be used to calculate the relative depth or distance of a feature from the image plane, Z, with Equation 2.3 (Hoff, 2012), using the camera focal length, f, and the baseline of the stereo camera rig, b, which is the distance between the origins of the back planes of the two cameras. The focal length can be left in units of pixels, but it is commonly calculated in world space units, millimeters for example, rather than the imaging plane units of pixels, and it can be referred to as the magnification coefficient when the depth is calculated in world space units.

Z = \frac{f\,b}{d} \qquad (2.3)

\Delta Z = \frac{f\,b}{d^{2}}\,(-\Delta d) \qquad (2.4)

\sigma_Z = Z\,\frac{\sigma_d}{d} \qquad (2.5)

The depth resolution, ∆Z, of triangulation in stereo vision is a function of distance to the object, as shown by Equation 2.4 (Hoff, 2012). As an object moves farther from the camera it becomes more difficult to discern small lateral shifts in the image plane. The depth resolution is therefore a function of the minimum disparity change that can be observed, ∆d, which depends on the camera resolution and can also depend on the matching algorithm. The uncertainty in the depth measurement, given in Equation 2.5 (Hoff, 2012), depends on the uncertainty in the disparity, σ_d, which is usually half a pixel. Therefore the uncertainty of the depth is dependent on the physical parameters of the camera. Cameras can be tailored to the application by selecting an optimal resolution and adjusting the camera baselines and fields of view.
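As a rough numerical sketch of Equations 2.3 through 2.5, the focal length, baseline, and disparity below are arbitrary illustrative values, not the parameters of the camera rig used in this work:

```python
# Illustrative values only, not the thesis hardware.
f = 2400.0       # focal length in pixels
b = 50.0         # stereo baseline in mm
d = 120.0        # measured disparity in pixels
sigma_d = 0.5    # disparity uncertainty, typically half a pixel

Z = f * b / d                  # Eq. 2.3: depth, here 1000 mm
dZ_per_px = f * b / d**2       # Eq. 2.4: depth change per pixel of disparity, ~8.3 mm
sigma_Z = Z * sigma_d / d      # Eq. 2.5: depth uncertainty, ~4.2 mm
```

Note that halving the disparity (doubling the depth) quadruples the per-pixel depth step, which is why short working distances or long baselines are preferred for fine surface measurements.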

2.5.2 Survey of Stereo Vision Correspondence Algorithms

The stereo vision measurements must be calculated by finding the correspondence of two regions of an image. This can be accomplished by a variety of techniques, which usually fall into the category of local or global. Local techniques analyze the correspondence between two image sections, or windows, of a specific size. Global algorithms use methods that seek an optimal disparity value for all regions of the entire image. A global algorithm typically has one or more smoothness or weighting parameters that serve as inputs to a global cost function, which is optimized over the disparity values (Szeliski, 2010).
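In the generic formulation given by Szeliski (2010), a global method seeks a disparity field d that minimizes an energy of roughly the following form, where C is a per-pixel matching cost, \mathcal{N} is the set of neighboring pixel pairs, V is a smoothness penalty, and \lambda weights the smoothness term (the notation here is a paraphrase, not a quotation):

E(d) = \sum_{p} C\big(p,\, d_p\big) \;+\; \lambda \sum_{(p,q)\,\in\,\mathcal{N}} V\big(d_p,\, d_q\big)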

In order to identify which matching algorithms would be best suited for this application, several stereo matching algorithms were selected from survey papers, including those by Hirschmüller & Scharstein (2008) and Szeliski & Zabih (1999), the Szeliski (2010) computer vision book, and the Middlebury stereo website (Scharstein et al.). From these sources the most promising algorithms were selected for this application based on the criteria of (1) ease of implementation, (2) ability to handle noise, (3) proficiency in smooth and featureless surfaces, and (4) processing time. Based on these four evaluation criteria, of all the reviewed algorithms, the Block Matching, Semiglobal Block Matching, bilateral subtraction, and Global Correspondence through Scene Flow algorithms were selected for further study.

2.5.3 Block Matching Algorithm

The Block Matching (BM) algorithm is a simple and very common form of stereo correspondence. It is a parametric local technique in which a template of a set window size, typically a square region of 11 pixels by 11 pixels or larger, is selected from one image and compared to regions in the other image (Itseez, 2014). In this application, a sum of absolute differences (SAD) approach is used. This is accomplished by computing the sum of pixel-wise differences over the comparison neighborhood. The SAD value is then used to score the region, using Equation 2.6.

\sum_{(i,j)\,\in\,\text{SAD window}} \left| I_1(i,j) - I_2(x+i,\,j) \right| \qquad (2.6)

This calculation effectively searches for the square region in Image 2 whose pixel intensities most closely match those of the region in Image 1. The score is computed across a range of candidate locations, and the lowest-scoring location, which indicates the best match, is then used to compute the disparity value assigned to the disparity map.

The search is performed based on a set of parameters. The minimum disparity dictates the start of the correspondence search, and the maximum value, or range of disparity values, is used to limit the extent of the search. This caps the minimum and maximum allowable values as well as limiting the search area, thereby speeding up processing time. The size of the search window changes the amount of searching and provides a larger or smaller region for comparison, thereby averaging out noise as well as features. These values must therefore be optimized for specific scenes based on the minimum and maximum depth, feature sizes, texture, and noise expected.
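The following is a minimal NumPy sketch of the SAD search just described, written for clarity rather than speed; the window size and disparity range are example values, and a practical implementation would instead use the OpenCV routines discussed above.

```python
import numpy as np

def sad_disparity(left, right, row, col, window=11, max_disp=64):
    """Return the disparity at (row, col) of the left image that minimizes Eq. 2.6."""
    half = window // 2
    ref = left[row - half:row + half + 1, col - half:col + half + 1].astype(np.int32)
    best_d, best_score = 0, np.inf
    for d in range(max_disp):
        c = col - d                       # candidate column in the right image
        if c - half < 0:                  # candidate window would leave the image
            break
        cand = right[row - half:row + half + 1, c - half:c + half + 1].astype(np.int32)
        score = np.abs(ref - cand).sum()  # sum of absolute differences
        if score < best_score:
            best_score, best_d = score, d
    return best_d
```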

The block matching algorithm is relatively simple, is a good benchmark algorithm, and is available in the OpenCV library, making it relatively easy to implement. It is, however, more susceptible to noise. The noise can produce matches that are far from the actual disparity location. The OpenCV implementation also has several other parameters available to help with these issues, but they require extensive parameter tuning (Itseez, 2014).

2.5.4 Semiglobal Block Matching Algorithm

The Semiglobal Block Matching (SGBM) algorithm uses the fundamental principle of block matching but adds a weighting function (Hirschmüller, 2008). The weighting function calculates a local and regional cost of matching for each matching window location; the correspondence, calculated by a sum of squared differences, is then selected based on both the match score and the cost. The algorithm typically requires smaller matching windows because of this cost function.
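As an illustration of how this weighting is exposed in the OpenCV implementation (again a sketch assuming the 3.x-style Python API, with values following the library's commonly cited rule of thumb rather than parameters tuned for welding imagery), the P1 and P2 terms penalize small and large disparity changes between neighboring pixels:

```python
import cv2

block = 5  # SGBM generally works with a smaller window than plain block matching

sgbm = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=64,        # search range; must be a multiple of 16
    blockSize=block,
    P1=8 * block * block,     # penalty for a disparity change of one between neighbors
    P2=32 * block * block,    # larger penalty for bigger disparity jumps
    uniquenessRatio=10,       # reject matches not clearly better than the runner-up
)

# disparity = sgbm.compute(left_rectified, right_rectified).astype(float) / 16.0
```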
