Vision-Based Quality Inspection in Robotic Welding. Source: oagm2011.joanneum.at/papers/26.pdf, 2011. 4. 29.


Vision-Based Quality Inspection in Robotic Welding

Markus Heber, Christian Reinbacher, Matthias Rüther, Horst Bischof

Institute for Computer Graphics and Vision, Graz University of Technology, Austria

{mheber,reinbacher,ruether,bischof}@icg.tugraz.at

Abstract

In this work we present a novel method for assessing the quality of a robotic welding process. While most conventional automated approaches rely on non-visual information like sound or voltage, we introduce a vision-based approach. Although the weld seam appearance changes, we exploit only the information from error-free reference data, and assess the welding quality through the number of highly dissimilar frames. In our experiments we show that this approach enables an efficient and accurate separation of defective from error-free weldings, as well as detection of welding defects in real-time by exploiting the spatial information provided by the welding robot.

Figure 1. Defective and error-free image patches from a sample repetitive welding task

1. Introduction

Robotic arc welding is widely used in the automotive industry. It is fast, cheap, and accurate. Although the degree of automation in such a process is high, the quality assessment of welded seams is still mostly done manually by experts. Typical modalities used by humans are visual inspection, the recognition of unusual sounds, or the monitoring of arc voltage. Automated welding quality analysis approaches typically rely on indirect measurement modalities like voltage and current [8], welding arc brightness variations [11], sound [12], or the 3D structure or shape of the seam in front of the welding torch [13]. The difficulty in automated error detection in robotic welding is due to the large variety of processes and of possible defects. Typically, weld seam images acquired during a welding task are accompanied by heavy noise like smoke, spatter, evaporating water, or gas disturbances. A defective process will not only exhibit small, punctual defects of the weld seam, but is also characterized by significantly increased noise. A non-defective process, on the contrary, produces less noise and has a locally repetitive appearance, but it may still exhibit systematic appearance changes during welding. Figure 1 depicts exemplary defective and error-free weld seam patches from a sample repetitive welding process, demonstrating that a correct classification is not an easy task. It is practically


impossible to generate a representative set of all possible errors. Also, errors cannot be globally discriminated from error-free welding due to local and abrupt or systematic appearance changes during welding. Therefore, we propose an automated vision-based welding quality analysis approach that uses only error-free reference data in order to compute a representative model of a welding process. We hereby follow the strategy of acquiring a number of error-free reference samples and building a database. During the welding task, images of the newly welded seam are acquired and subsequently classified according to their deviations from the reference. To cope with typical appearance changes during a welding task, we propose to incorporate temporal proximity, which is typically given by timestamps in conjunction with the defined robot motion. In the error case this also allows to exactly localize potential defects on the welded seam. The separation of inspected seams into either error-free or defective ones is applied during the welding task in a real-time fashion. Although there exist approaches that automatically assess the quality of robotic welding, few work online and incorporate an inspection of the newly welded seam during the welding process into the quality inspection. Our main contributions are the introduction of a vision-based method for assessing the quality of a robotic welding task in real-time, which is trained automatically and without negative training data, as well as two applications based on the introduced method: first, the automatic classification of complete weldings, and second, the automatic localization of punctual welding defects.

2. Related Work

The method presented in this paper closely resembles the problem of unusual event detection in video sequences [1, 4]. The significant difference to our approach is that typical unusual event detection methods use large amounts of training data, and in the majority of cases apply learning techniques in order to adapt to newly observed objects or unseen events. On the contrary, we only exploit information from very few error-free reference samples, and do not apply learning techniques during quality inspection. Unusual event detection or one-class classification (OCC) aims at the classification of input data into usual and unusual events. One can roughly distinguish two approaches to this problem: a) methods based on constant and previously trained models of normality [4, 5, 9, 16], and b) methods which try to adapt to newly observed scenes in real-time [1, 10].
In [1] the authors proposed to learn a model of normality by observing a scene with a static camera. New observations are classified either as statistical outliers, or as normal events resulting in an adaption of the existing model. The method was developed for natural scenes, where the normal activity exhibits a large variability. In contrast to that, industrial manufacturing tasks are highly repeatable. Hence, regular deviations are rather small in the error-free case.
Unusual event detection without adaption during runtime is applied to a welding quality inspection task in [4]. The method utilizes Hidden Markov Models (HMM) to account for the problem of weakly labeled training sets. Unlike in our approach, the authors apply their method to laser welding sequences to detect irregularities, where a camera monitors the emitted radiation. The quality inspection computations are applied offline, i.e. after the welding process is already finished, and the method has been evaluated on roughly 1000 sequences.
Kenner [6] introduced a defect detection system for industrial applications in a general context.
The system is based on OCC and outlier detection. One-class training data is learned online with the well-known AdaBoost algorithm. In this way, a strong classifier is obtained from several weak classifiers for the outlier and defect detection. The system is designed in a very general context, and robotic welding tasks can also be a field of application.
In [11] Schreiber et al. presented a vision-based weld seam quality inspection approach. They rely on extracted features like the local light distributions around the welding torch, the weld seam


width, as well as the weld seam position. Although they also incorporate images of the welded seam, only indirect measurements are used for the final quality score. They evaluated their method on datasets from two curved weld seams, consisting of about 400 image frames each. Another related approach is presented in [14]. The authors utilized the deviations between the welding torch and the seam center in front of the welding arc as a measure of welding quality. Large deviations here result in poor quality values. However, they utilize indirect quality measures and also do not assess the quality of the welded seam.
In [2] Fennander et al. introduced a system that automatically analyzes the regularity of the electric arc in welding processes. For the detection of arcs, they use k-means clustering to segment image regions, followed by a droplet localization using Support Vector Machines (SVM) for classification. This approach is related to our method, as visual information is at least utilized in order to localize the droplets, which are typical indicators for welding errors.
State-of-the-art methods that perform automatic inspection of robotic welding tasks mostly rely on indirect measures like sound or voltage and current [8, 12]. Furthermore, they are typically applied offline after the welding task is finished.

3. Methodology

We propose a real-time visual inspection of the newly welded seam consisting of three steps: First, error-free reference data is generated in a preparatory task. The image acquisition system, which is the same for training and classification, is briefly described in Section 3.1 and depicted in Figure 2. Second, we generate a small database of reference images by reducing the redundancy in the reference data, as described in Section 3.2. Third, classification of new incoming image patches is performed by thresholding the similarities between a patch and the reference data entries, as described in Section 3.3. A welding process specific detection threshold, which separates error-free image patches from unusual, unseen ones, is derived automatically from the available reference data.
Considering the acquired data, we use the following wording throughout the rest of the paper: A welding process designates the material-, parameter-, and hardware-dependent process accomplished by an industrial welding robot. A welding sequence designates the welding of a complete specific object, from the start to the end of the seam. It consists of a set of axis-aligned image patches acquired during the welding task. In factory automation, a robot continuously repeats the same process. The data acquired from repeated processes is denoted as a welding process dataset.

Figure 2. Data acquisition system

3.1. Data Acquisition

Data acquisition is performed by an industrial image acquisition system. A camera is rigidly mounted to the head of a welding robot, and observes the specimen right behind the welding arc. As such,


the newly welded seam is observed immediately, and occlusion through further manipulation of the specimen is prevented. Due to the unknown robot motion during welding, the position of the newly welded seam in the acquired image frames is unknown. Therefore, a tracking algorithm follows the location of the welded seam in the image and allows to extract small axis-aligned patches from each camera frame. The image acquisition is typically performed at 10–20 fps. Hence, image processing and classification must be accomplished within 50–100 ms for real-time capability.

3.2. Reference Database Generation

We generate a reference database by applying the unsupervised clustering approach introduced in [3] to the training data. This results in clusters that are highly dissimilar, each describing several similar image patches. For the reference data generation, at least two and typically four complete and error-free welding sequences R are acquired per welding process. As shown in Figure 3, the high image overlaps result in large amounts of redundancy. The overlap mainly depends on the process-specific welding speed, and typically lies within 10–30 frames for a single pixel. Hence, clustering results in a significant reduction of redundancy. Image patch clustering reduces the available reference image patches fi, i ∈ [1, N], from the reference sequences R to specific prototypes cj, j ∈ [1, M], where M ≪ N. Each obtained cluster center cj is given by a tuple, consisting of a representative image patch c and several timestamps t1 · · · tn from the image patches that vote for cj:

cj = {c, [t1 · · · tn]} with c ∈ fi (1)

Figure 3. Welding image sequence acquired at 20 fps with emphasized redundancy
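The tuple of Eq. (1) can be sketched as a small data structure. The following is an illustrative Python sketch under our own naming (Prototype and min_time_gap are not identifiers from the paper): each cluster center stores one representative patch plus the timestamps of all reference patches that voted for it.

```python
from dataclasses import dataclass, field


@dataclass
class Prototype:
    """Cluster center cj = {c, [t1 ... tn]} from Eq. (1): one
    representative image patch plus the timestamps of all reference
    patches that voted for this cluster."""
    patch: object                                   # representative patch c (e.g. a 161x161 image)
    timestamps: list = field(default_factory=list)  # voting timestamps [t1 ... tn]

    def min_time_gap(self, t: float) -> float:
        """Smallest temporal distance from t to any voting timestamp."""
        return min(abs(t - ti) for ti in self.timestamps)
```

Keeping all voting timestamps, rather than a single one, is what later allows an incoming patch to be matched against the temporally closest prototype.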

We use Affinity Propagation (AP) [3] for clustering, because it has shown state-of-the-art performance for a variety of unsupervised clustering problems [7, 15] without complex parametrization. AP identifies prototypes out of a set of arbitrary data points by passing messages between them, with a pairwise similarity measure between all data points given as input. Instead of comparing incoming image patches against each reference image patch, we only keep the M prototypes and corresponding timestamps for quality inspection. Typical values for N and M are ≈ 200 and 10–20, respectively. The usage of temporal proximity given by timestamps, which define the welding robot motion, further simplifies the classification problem, because incoming image patches are related to a subset of the obtained cluster centers. Furthermore, potential welding defects can be exactly located on the welded seam, up to a synchronization gap ∆ of a few ns (≈ 0.5 px) caused by the acquisition system. Each obtained prototype cj is an exemplar of the initial reference data and is accompanied by the appropriate timestamps [t1 · · · tn] from all image patches that vote for prototype cj. In order to be robust against systematic appearance changes, and in order to be able to exactly locate potential welding defects, each unseen incoming image patch p with timestamp tp is compared to the cluster center cj with minimal temporal gap. Hence, we identify the minimum time difference cluster center C, and use the


corresponding image patch for similarity evaluation against the actual image patch p:

C = cj if tp ±∆ ∈ [t1 · · · tn] (2)

sp = S(p, C{c}), S : R2 × R2 → R (3)
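Equations (2) and (3) amount to a nearest-in-time lookup followed by one similarity evaluation. A minimal Python sketch under our own naming (select_center and similarity are hypothetical helpers; the similarity measure S is passed in as a callable):

```python
def select_center(centers, t_p):
    """Eq. (2): choose the cluster center C whose voting timestamps
    [t1 ... tn] lie closest in time to the query timestamp t_p.
    (The paper additionally tolerates a small synchronization gap.)
    Each center is a tuple (representative_patch, timestamps)."""
    return min(centers, key=lambda c: min(abs(t_p - t) for t in c[1]))


def similarity(p, center, S):
    """Eq. (3): evaluate the pairwise similarity measure S between the
    incoming patch p and the representative patch c of the chosen center."""
    c, _timestamps = center
    return S(p, c)
```

Because only M prototypes (rather than all N reference patches) are kept, the lookup stays cheap enough for the 50–100 ms per-frame budget.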

3.3. Defect Detection

In continuous industrial use, each unseen image patch is compared to the reference data entries by evaluating a similarity measure. Image patches that exhibit high dissimilarities are rated as unusual events. Classification of patches p into error-free or defective is given by the computed similarity values sp. If sp falls below a process-specific detection threshold θ, p is classified as defective. Basically, any similarity metric S : R2 × R2 → R, which allows a pairwise comparison of image patches, can be applied. We chose the two-dimensional normalized cross correlation (NCC), which is commonly used in computer vision for template matching. It is simple, fast, and more robust to lighting changes compared to e.g. the sum of squared differences (SSD). We perform robust matching by applying a rotating and sliding window approach as depicted in Figure 4. Reference image patches are rotated by αi and cropped to overcome slight translational and rotational misalignments during image acquisition, resulting in more robust matching. Scale variations could be considered by a scale-space oriented approach, but as camera and welding robot are rigidly connected, resulting in a constant working distance for the complete welding process, this is not necessary in our case.

Figure 4. Robust rotation and translation aware matching

During welding, each newly acquired image patch p is assigned a similarity sp to the clustered reference data. All patches with a similarity value sp smaller than a process-specific detection threshold θ are reported as unusual events, and consequently as potential welding defects. We determine θ from the available reference sequences R in a way that a welding sequence can be classified as either error-free or erroneous without any further information or learning efforts. We use the median as well as a safety margin δ, which was set to 0.09 = 9% within our experiments, to be robust against potential outliers:

θ = median(S(Ri, Rj)) − δ, i ≠ j and i, j ∈ [1 · · · |R|] (4)
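As a concrete illustration, the NCC score and the threshold rule of Eq. (4) could look as follows. This is a simplified sketch, not the paper's implementation: it correlates two equal-sized patches at a single alignment and omits the rotating/sliding window; ncc and reference_threshold are our own names.

```python
import itertools

import numpy as np


def ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross correlation of two equal-sized patches.
    Invariant to affine intensity changes, hence more robust to
    lighting variations than SSD."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))


def reference_threshold(ref_patches, delta=0.09):
    """Eq. (4): median pairwise similarity over the error-free
    reference patches, minus a safety margin delta (9% in the paper)."""
    sims = [ncc(ri, rj)
            for ri, rj in itertools.combinations(ref_patches, 2)]
    return float(np.median(sims)) - delta
```

Note that ncc(a, 2*a + 3) still yields 1.0, which is exactly the lighting robustness the paper relies on; an SSD-based score would not share this property.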

4. Experiments

Based on the introduced method, we propose two different applications for robotic welding quality inspection: Application A separates complete welding sequences into either error-free or defective ones. We show that a single unusual event suffices to classify the corresponding sequence as defective. Application B exactly locates punctual defects on the welded seam. This application is of interest for welding tasks where small punctual defects might be tolerated considering quality inspection. We evaluate our method on 9 different welding process datasets. The corresponding sequences are


characterized by extreme illumination changes, heavy smoke, spatter and spilling, curved weldings, different welding speeds, and different materials. In this way, we demonstrate that our method can cope with the high variability and noise of industrial welding processes. To allow a quantitative analysis of welding defects, the evaluated welding sequences as well as the corresponding image patches have been manually classified into error-free and defective by an expert. Each welding process dataset consists of 11 error-free and 9 erroneous welding sequences. Evaluated sequences contain between 103 and 231 image patches of size 161 × 161 px. Erroneous welding sequences show typical welding errors like holes, narrow weld seam regions, blisters, deformations, or gaps. Four out of ten error-free sequences are chosen for reference data generation. The remaining sequences are used for the method evaluations. Overall, 153 welding sequences consisting of a total of 25166 welding image patches are evaluated. Considering run-time, our Matlab implementation of the proposed method reached an average processing rate of 51.20 fps on an Intel Core i7 2.8 GHz processor. Hence, real-time processing at typical image acquisition rates of 10–20 fps is definitely feasible.

4.1. Application A: Sequence Classification

The goal is to classify complete welding sequences as either defective or error-free, regardless of the type of error that occurred. As soon as a single similarity value falls below the reference threshold θ, the sequence is classified as defective. In Figure 5(a) the number of unusual events is shown over varying similarity thresholds for a sample welding process. The reference threshold θ and the ideal threshold are marked, where ideal denotes that no unusual events are detected within error-free weldings. The marked distance d for the reference threshold θ illustrates that the desired separation of error-free and defective weldings is feasible. Our experimental evaluations show that 0.09 = 9% is a reliable value of δ for the reference threshold computation, as an average classification accuracy of 93% has been achieved. Table 1 provides quantitative results for the 9 different welding processes.
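Application A's decision rule reduces to one comparison per patch. A sketch under our own naming (classify_sequence is hypothetical), where sims holds the per-patch similarity values sp of one welding sequence:

```python
def classify_sequence(sims, theta):
    """Application A: a sequence is defective as soon as a single
    patch similarity s_p falls below the process threshold theta.
    Returns the verdict plus the indices of the offending patches."""
    defects = [i for i, s in enumerate(sims) if s < theta]
    verdict = "defective" if defects else "error-free"
    return verdict, defects
```

Returning the offending patch indices (rather than just the verdict) is what connects Application A to Application B: together with the patch timestamps and the known robot motion, each index maps to a position on the seam.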

4.2. Application B: Defect Localization

Considering e.g. weldings on the base plate of cars, very long seams might be welded. Here, discarding objects with small punctual welding defects can be unnecessary and costly. Knowledge of the exact location of welding defects provides the option of e.g. automatic repair or targeted manual inspection afterwards. Due to the availability of ground truth data provided by an expert, true positives (correct defect detections), false positives (incorrect defect detections), true negatives (correct error-free detections), and false negatives (incorrect error-free detections) are evaluated for each welding sequence. Examples of correctly detected welding defects as well as error-free image patches from the welding process with ID 4 are shown in Figure 5(b). Summarized quantitative results for all 9 evaluated welding processes are presented in Table 1. Overall, an average accuracy of 94% for defect localization has been achieved.
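The per-process accuracies in Table 1 follow directly from these confusion counts: for process ID 1 in Application A, for instance, TP = 10, TN = 5, FP = 1, FN = 0 gives (10 + 5)/16 = 93.75%. A quick sketch (accuracy is our own helper name):

```python
def accuracy(tp, tn, fp, fn):
    """Fraction of correct decisions (true positives plus true
    negatives) among all evaluated items, in percent."""
    return 100.0 * (tp + tn) / (tp + tn + fp + fn)
```

For example, accuracy(10, 5, 1, 0) reproduces the 93.75% reported for process 1 in Table 1.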

5. Conclusion

We have introduced a robust welding quality analysis method. In our experiments we have presented two quality inspection applications based on the proposed method. They have been evaluated on several welding processes, which cover a large variability of industrial robotic welding tasks. Considering the separation of weldings into defective and error-free, we achieved an average accuracy of 93%, while for the exact defect localization an average accuracy of 94% has been achieved. Furthermore, our approach benefits from the usage of very few, purely error-free reference data. Generated



Figure 5. Unusual events over varying thresholds, and sample detection results from welding process with ID 4

                         Application A                    Application B
ID     θ    Images   TP  TN  FP  FN    OK%     TP%    TN%   FP%    FN%    OK%
 1   0.87    3453    10   5   1   0   93.75   11.29  84.94  0.14   3.62  96.23
 2   0.84    3359     8   8   0   0  100.00   27.39  69.81  2.20   0.59  97.20
 3   0.76    3626     5   9   3   0   82.35   14.75  76.74  6.48   2.12  91.49
 4   0.85    3032    11   6   0   0  100.00   12.04  86.15  1.48   0.33  98.19
 5   0.85    3311     9   6   1   0   93.75    4.08  82.63  2.11  11.17  86.71
 6   0.83    3422    10   4   0   2   87.50    6.58  86.12  0.00   7.31  92.70
 7   0.74    1664    11   4   1   0   93.75    8.11  90.38  0.30   1.80  98.49
 8   0.77    1652    10   4   1   1   87.50    4.84  92.13  0.30   2.72  96.97
 9   0.78    1647     9   7   0   0  100.00   15.79  77.23  0.91   6.07  93.02

Table 1. Numerical results for classification of welding sequences (Application A), and for localization of weldingdefects (Application B) for 9 different welding processes

Figure 6. Samples for remaining problematic welding patches

databases typically consist of 10–20 cluster centers. This results in reduced costs for reference data generation, and in real-time capability. Industrial applications usually require a high degree of robustness and real-time performance. Experimental results empirically show that a separation of welding sequences into defective and error-free weldings is feasible, accurate, and fast. Furthermore, the localization of welding defects is possible due to the incorporation of information on the robot motion. Considering the amount of reference data used for our initialization step, more training sequences would result in more accurate welding process models, which allows the detection of welding defects at a finer level. The detection of slight contractions of the weld seam width is not feasible with only four reference sequences. We also observed that the chosen similarity metric (NCC) cannot cope with severe illumination and intensity variations caused by e.g. gas disturbances. Future work will address the classification of problematic image patches, the exploitation of the present redundancy, and the incorporation of additional measures like the seam width and shape.


Acknowledgments: This study has been conducted within the COMET K-Project "Embedded Computer Vision" (ECV). The authors would also like to thank the colleagues at Fronius Ltd. for their valuable contributions to this study.

References

[1] M. Breitenstein, H. Grabner, and L. Van Gool. Hunting Nessie: Real-time abnormality detection from webcams. In Proceedings ICCV'09 WS on Visual Surveillance, 2009.

[2] H. Fennander, V. Kyrki, A. Fellman, A. Salminen, and H. Kälviäinen. Visual measurement and tracking in laser hybrid welding. Jour. on Machine Vision and App., 20(2):103–118, 2007.

[3] Brendan J. Frey and Delbert Dueck. Clustering by passing messages between data points. Science, 315:972–976, 2007.

[4] M. Jäger, C. Knoll, and F. A. Hamprecht. Weakly supervised learning of a classifier for unusual event detection. IEEE Transactions on Image Processing, 17(9):1700–1708, 2008.

[5] Neil Johnson and David Hogg. Learning the distribution of object trajectories for event recognition. Journal on Image and Vision Computing, 14:609–615, 1996.

[6] Thomas Kenner. Fehlererkennung mittels One-Class Boosting. Master's thesis, Institute for Computer Graphics and Vision, Graz University of Technology, 2007.

[7] X. Li, H. Su, and J. Chu. Multiple model soft sensor based on affinity propagation, Gaussian process and Bayesian committee machine. Chin. Jour. of Chem. Engineering, 17:95–99, 2009.

[8] H. Ma, S. Wei, Z. Sheng, T. Lin, and S. Chen. Robot welding seam tracking method based on passive vision for thin plate closed-gap butt welding, 2009.

[9] D. Makris and T. Ellis. Learning semantic scene models from observing activity in visual surveillance. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 2005.

[10] Yael Pritch, Alex Rav-Acha, Avital Gutman, and Shmuel Peleg. Webcam synopsis: Peeking around the world. In IEEE International Conference on Computer Vision (ICCV), 2007.

[11] D. Schreiber, L. Cambrini, J. Biber, and B. Sardy. Online visual quality inspection for weld seams. Int. Jour. of Advanced Manufacturing Techn., 42(5-6):497–504, 2008.

[12] J. F. Wang, B. Chen, H. B. Chen, and S. B. Chen. Analysis of arc sound characteristics for gas tungsten argon welding. Journal on Sensor Review, 29(3):240–249, 2009.

[13] P. Xu, G. Xu, X. Tang, and S. Yao. A visual seam tracking system for robotic arc welding. Int. Jour. of Advanced Manufacturing Techn., 37(1-2):70–75, 2007.

[14] Z. Yan and D. Xu. Visual tracking system for the welding of narrow butt seams in container manufacture. In Proceedings UKACC International Conference on Control 2008, 2008.

[15] X. Zhang, P. Lu, H. Suo, Q. Zhao, and Y. Yan. Robust speaker clustering using affinity propagation. IEICE Transactions on Information and Systems, 11:2739–2741, 2008.

[16] H. Zhong, J. Shi, and M. Visontai. Detecting unusual activity in video. In Proceedings CVPR Conference on Computer Vision and Pattern Recognition, 2004.