
Robotic Arm Pick-and-place System for L-Shaped Water Pipe Joint with Arbitrary Angle

Wen-Chang Cheng, Department of Computer Science & Information Engineering, Chaoyang University of Technology, Taichung, Taiwan, [email protected]

Hung-Chou Hsiao, Department of Information Management, Chaoyang University of Technology, Taichung, Taiwan, [email protected]

Yuan-Pin Lin, Department of Computer Science & Information Engineering, Chaoyang University of Technology, Taichung, Taiwan, [email protected]

Yu-Hang Liu, Department of Computer Science & Information Engineering, Chaoyang University of Technology, Taichung, Taiwan, [email protected]

Abstract—The system combines a camera and a robotic arm to pick L-shaped water pipe joints placed at arbitrary angles. It is divided into two parts: the computer side and the arm side. The computer side observes the current position and angle of the workpiece: it receives the workpiece image, performs image processing operations (such as binarization and contour detection), converts the workpiece coordinates and angle into robot-arm coordinates and angle, and transmits them to the arm side. The arm side receives the coordinates and angle from the computer side and performs the corresponding picking action. The system locates the workpiece with a minimum bounding rectangle on the binarized contour, taking the rectangle center as the rotation point and one of its corners as the picking point. The experiments also simulated factory lighting conditions: during the day, at noon and at night, four, three, two and one fluorescent lamps were turned on, and 100 tests were performed for each of the 12 situations. The average success rate is as high as 99%. The experiments show that the system can be used in actual factories, saving related manpower and material resources.

Keywords—Contour Detection, Perspective Transformation, Corner Detection, Factory Automation, Industry 4.0, Factory Intelligence, Grasp Pose Detection.

I. INTRODUCTION

The use of robotic arms in factory automation has become quite extensive [1]. At present, in many automated factories, the feed must still be fixed in the same orientation and position by hand or by a dedicated machine before the robot can pick it, which wastes manpower and increases equipment cost. This system therefore combines a camera with a robotic arm: through image processing, the arm can see the position and angle of the workpiece through the camera, move to the exact coordinates, rotate the gripper to pick the workpiece, and finally place it at the specified position. No manual placement is required, manpower waste is reduced and processing is faster, thereby improving quality and achieving factory intelligence [2].

In the application of workpiece picking, the primary goal is to find the object. S. R. Balaji et al. [3] reviewed traditional image-processing methods for object detection and tracking, which can be divided into three approaches: frame difference, background subtraction and optical flow. Shou-Cih Chen [4] proposed constructing the edge contour of an object by extending the texture between blocks while retaining the effective edge features of longer edge segments in the image, which effectively reduces the edge loss caused by threshold setting. In addition, some methods use additional devices to improve the accuracy of object localization. Jia-Wei Hsu [5] developed gray-code structured-light scanning to reconstruct the 3D spatial information of the workpiece on the processing platform and obtain its actual spatial coordinates. Jun-Hao Wu [6] used dual CCDs for image identification, 3D scene reconstruction and image positioning. Yi-Hao Hsieh [7] used automatic optical inspection to obtain the workpiece image, detect its side contour and calculate its error.

In recent years, object detection based on deep learning has become popular [8-14]. Sheng-Chieh Chuang [15] proposed a robotic-arm system for picking stacked workpieces based on RGB-D images, using a deep-learning network to predict the grasping frame of the workpiece, with an average picking success rate of 80.5%. Joseph Redmon et al. [8-10] proposed YOLO, which treats object detection as a regression problem over spatially partitioned bounding boxes and their category probabilities, predicting the position and category of objects in the image with a single neural-network pass. Heng Guan et al. [16] proposed a grasping method based on a fully convolutional neural network: the input is an RGB image plus depth, and after passing through custom network layers it outputs the rotation angle of the robot arm, the opening width of the gripper and the probability of a successful grasp, with a grasping success rate of 85.6%. Ian Lenz et al. [17] used two deep networks: the first has few features but runs quickly and effectively filters out non-target candidates, while the second is more powerful but slower and performs the final detection; a structured regularization of the weights based on multimodal group regularization handles the multimodal input, and grasping was performed successfully on two different robotic platforms. Our application in an actual factory environment requires immediacy and accuracy, so we still use traditional image processing.


To simplify the system, a single camera with image-processing techniques is used to find the contour of the workpiece. After the angle and coordinates of the workpiece are obtained, they are converted into the coordinate system of the robotic arm, so that the arm can automatically reach the picking position, rotate to the corresponding angle, pick the workpiece and finally place it at the designated position, thereby reducing labor costs and increasing production speed.

II. FLOWCHART

The system flow is divided into a computer side and a robotic-arm side, described below.

Fig. 1 is the flowchart of the computer side. First the camera is turned on to capture an image; the image is converted to grayscale and then binarized. Contour detection then looks for the workpiece. If no workpiece is found, the search is repeated on a new image. Otherwise, the picking point and angle of the workpiece's minimum rectangle are converted from image coordinates to arm coordinates using the transformation matrix, and the coordinates are sent to the arm side, completing one pass on the computer side. The program then either captures the next image or ends.

Figure 1. FLOWCHART ON THE COMPUTER SIDE

Fig. 2 is the flowchart of the arm side. The arm side waits for the coordinates and angle from the computer side; if nothing has been received, it keeps waiting. Once they are received, the arm moves above the picking point, rotates the gripper to the corresponding angle, picks the workpiece and carries it to the specified position. After the workpiece is placed, the system either shuts down or continues to wait for the next coordinates and angle from the computer.

Figure 2. FLOW CHART OF THE ROBOTIC ARM SIDE
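The paper does not state how the coordinates and angle are handed over to the Mitsubishi controller. As a purely illustrative sketch, one could assume a simple TCP text protocol in which the computer side sends one "X,Y,angle" line per workpiece; the host, port and message format below are hypothetical, and the motion commands are only printed as placeholders for the controller's own instructions.

```python
import socket

HOST, PORT = "0.0.0.0", 5000          # hypothetical listening address of the arm-side program

def arm_side_loop():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((HOST, PORT))
    srv.listen(1)
    conn, _ = srv.accept()             # wait for the computer side to connect
    buf = b""
    while True:
        data = conn.recv(1024)         # block until coordinates and angle arrive
        if not data:
            break                      # connection closed: end the system operation
        buf += data
        while b"\n" in buf:
            line, buf = buf.split(b"\n", 1)
            X, Y, angle = (float(v) for v in line.decode().split(","))
            # Placeholders for the controller's motion commands:
            # move above (X, Y), rotate the gripper to `angle`, pick, then place.
            print(f"move to ({X:.1f}, {Y:.1f}) mm, rotate gripper to {angle:.1f} deg, pick and place")
    conn.close()
    srv.close()

if __name__ == "__main__":
    arm_side_loop()
```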

III. METHOD

This section introduces the principle and formulation of each method used.

A. Image preprocessing

Because this system uses contour detection to find the position of the workpiece, the color image is first converted to a grayscale image (Fig. 3(a)) and then binarized into a binary image (Fig. 3(b)). From Fig. 3(b) it can be seen that the grayscale value of the workpiece differs greatly from the background, so binarization highlights the position of the workpiece. However, it also highlights parts that are not the workpiece, so we keep only the region of interest (ROI), shown as the red box in Fig. 3(b). The result, with only the workpiece left, is shown in Fig. 3(c).
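A minimal OpenCV sketch of this step. The threshold value and the ROI rectangle below are illustrative assumptions rather than values reported in the paper, and whether THRESH_BINARY or THRESH_BINARY_INV is needed depends on whether the workpiece is darker or brighter than the background.

```python
import cv2
import numpy as np

def preprocess(frame, roi=(100, 50, 400, 300), thresh=100):
    """Grayscale -> binary -> keep only the region of interest (Fig. 3(a)-(c))."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)            # Fig. 3(a): grayscale image
    # Assumes a workpiece darker than the background, so it ends up 255 (white).
    _, binary = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY_INV)
    x, y, w, h = roi                                           # assumed red-box ROI in Fig. 3(b)
    out = np.zeros_like(binary)
    out[y:y + h, x:x + w] = binary[y:y + h, x:x + w]           # Fig. 3(c): only the workpiece remains
    return out
```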

B. Find workpiece and pick point detection

We use connected-component detection to find the contour of the workpiece and then describe the contour with its minimum bounding rectangle. In this paper this is implemented with the minAreaRect() function in OpenCV [18], which returns the minimum rectangle enclosing the contour and its rotation angle with respect to the horizontal axis (clockwise positive). As shown in Fig. 4, the center coordinate of the workpiece is (x, y) = (359, 143) and the angle is 64.65 degrees (anticlockwise) with respect to the horizontal axis.

Figure 3. IMAGE PREPROCESSING: (A) GRAYSCALE IMAGE, (B) BINARY IMAGE, (C) ROI IMAGE

Figure 4. THE MINIMUM RECTANGLE OF THE WORKPIECE CONTOUR
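A sketch of this step with OpenCV's findContours() and minAreaRect(), assuming the workpiece is the largest connected component in the binary image.

```python
import cv2

def find_workpiece(binary):
    """Return (found, center, angle, corners) of the workpiece's minimum rectangle."""
    # OpenCV 4.x returns (contours, hierarchy); OpenCV 3.x returns an extra image first.
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return False, None, None, None                       # workpiece not found
    cnt = max(contours, key=cv2.contourArea)                  # assume the largest blob is the workpiece
    (cx, cy), (w, h), angle = cv2.minAreaRect(cnt)            # center, size and rotation angle
    corners = cv2.boxPoints(((cx, cy), (w, h), angle))        # four corner points of the rectangle
    return True, (cx, cy), angle, corners
```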

This system clamps a specific workpiece at an arbitrary angle, so the clamping point varies. As shown in Fig. 4, the system takes the center of the workpiece contour's minimum rectangle as the rotation center of the gripper (red point), and the three candidate picking points are the upper-left corner (yellow point), the upper-right corner (green point) and the bottom-right corner (blue point). In conjunction with the binarization, workpiece pixels are set to 255 (white) and the rest to 0 (black). The picking rules are: (1) if the yellow point is 0 and both the green and blue points are 255, the pick point is the green point; (2) if the yellow point is 255, the green point is 0 and the blue point is 255, the pick point is the yellow or blue point; (3) if the yellow, green and blue points are all 255, the pick point is the yellow or blue point; (4) if the yellow and green points are 255 and the blue point is 0, the pick point is the green point. (See Table I.)

A simpler determination is that when the yellow point and the blue point are both 255, the pick point is the yellow or blue point; if the yellow point is 0 and the blue point is 255, or the yellow point is 255 and the blue point is 0, the pick point is the green point. (See Table II.)

TABLE I. THE PICK POINTS INSPECTION

Yellow Point | Green Point | Blue Point | Pick Point
0            | 255         | 255        | Green
255          | 0           | 255        | Yellow or Blue
255          | 255         | 255        | Yellow or Blue
255          | 255         | 0          | Green

TABLE II. THE EASY PICK POINTS INSPECTION

Yellow Point | Blue Point | Pick Point
0            | 255        | Green
255          | 255        | Yellow or Blue
255          | 0          | Green

As shown in Fig. 3(c), after binarization the yellow point is 0 and the green and blue points are 255, so the pick point is the green point (u, v).
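The simplified rule of Table II can be evaluated directly on the binary image. In this sketch the caller is assumed to have already identified which rectangle corners play the roles of the yellow (upper-left), green (upper-right) and blue (bottom-right) points of Fig. 4.

```python
def choose_pick_point(binary, yellow, green, blue):
    """Apply Table II: pick yellow/blue when both lie on the workpiece, otherwise green."""
    def value(pt):
        u, v = int(round(pt[0])), int(round(pt[1]))
        return binary[v, u]                    # 255 on the workpiece, 0 on the background

    if value(yellow) == 255 and value(blue) == 255:
        return yellow                          # yellow (or equivalently blue) is the pick point
    return green                               # otherwise pick at the green corner
```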

C. Coordinate conversion

Perspective transformation, also known as projective mapping, corrects the tilt and distortion introduced when the 3D scene viewed by the camera is projected onto a new 2D plane. Given four corresponding points of the original image and the transformed plane, the transformation matrix can be obtained through formula (1). The general transformation formula is:

$$\begin{pmatrix} x \\ y \\ w \end{pmatrix} = \begin{pmatrix} m_{11} & m_{12} & m_{13} \\ m_{21} & m_{22} & m_{23} \\ m_{31} & m_{32} & m_{33} \end{pmatrix} \begin{pmatrix} u \\ v \\ 1 \end{pmatrix} = \boldsymbol{M} \begin{pmatrix} u \\ v \\ 1 \end{pmatrix} \tag{1}$$

and

$$X = \frac{x}{w} \quad \text{and} \quad Y = \frac{y}{w} \tag{2}$$

(X, Y) are the coordinates after conversion. In this paper we use the getPerspectiveTransform() function provided by OpenCV [19]; the transformation matrix M is obtained from the coordinates of four corresponding points on the two planes.
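A short sketch of equations (1)-(2) applied to an image point (u, v); cv2.perspectiveTransform() performs the same homogeneous division internally.

```python
import numpy as np

def image_to_arm(M, u, v):
    """Map an image pixel (u, v) to arm coordinates (X, Y) via eqs. (1)-(2)."""
    x, y, w = M @ np.array([u, v, 1.0])   # eq. (1): homogeneous transformation
    return x / w, y / w                   # eq. (2): divide by w
```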

IV. RESULT

The Mitsubishi robotic arm used in this system (shown in Fig. 5) has six axes, each independently controlled by a motor and computer-programmed. It is equipped with a conveyor belt and a set of sensors to establish a standard process control.

Figure 5. MITSUBISHI ROBOTIC ARM

First of all, the experiment needs the transformation matrix. We use a checkerboard as the calibration target and record the image coordinates of four corner points: upper left, upper right, lower left and lower right, as shown in the left parts of Fig. 6(a)-(d). Next, a reference point of the arm is actually moved to the position of each of the four sampled corners of the checkerboard, as shown in the right parts of Fig. 6(a)-(d). The results are shown in Table III.

TABLE III. CONVERSION MATRIX COORDINATES

Location     | image x (pixel) | image y (pixel) | robotic X (mm) | robotic Y (mm)
upper left   | 181             | 140             | 403.2          | -118.2
upper right  | 329             | 127             | 401.3          | -16.1
bottom left  | 187             | 233             | 509.8          | -188.2
bottom right | 336             | 222             | 502.5          | -16.1

After the image corner coordinates and the corresponding robot-arm coordinates have been obtained for this example, M is computed with the perspective-transformation function:

$$\boldsymbol{M} = \begin{bmatrix} 0.3275 & 1.2832 & 207.9227 \\ 1.2711 & -0.0948 & -425.3658 \\ 0.0006 & 0.0000 & 1.0000 \end{bmatrix} \tag{3}$$
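For reference, the following sketch computes M from the four point pairs of Table III with getPerspectiveTransform() and then maps the workpiece center of Fig. 4 into arm coordinates; the result can be compared against (3), though we have not re-verified the published numbers.

```python
import cv2
import numpy as np

# Point pairs from Table III: image pixels -> robot-arm millimetres
img_pts = np.float32([[181, 140], [329, 127], [187, 233], [336, 222]])    # UL, UR, BL, BR (pixel)
arm_pts = np.float32([[403.2, -118.2], [401.3, -16.1],
                      [509.8, -188.2], [502.5, -16.1]])                   # UL, UR, BL, BR (mm)

M = cv2.getPerspectiveTransform(img_pts, arm_pts)   # 3x3 matrix, to compare with eq. (3)
print(M)

# Map the workpiece center of Fig. 4 (image coordinates) into arm coordinates:
u, v = 359, 143
x, y, w = M @ np.array([u, v, 1.0])
print(x / w, y / w)
```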

We simulated the light intensity in the factory to analyze whether the light source affects the success rate of the clamping system. The results of turning on 4, 3, 2 and 1 fluorescent lamps during the day, at noon and at night are shown in Table IV. Out of the 1200 tests, the workpiece contour was missed 12 times, for a success rate of 99%; the failures occurred only at night with few lamps turned on. This shows that when the ambient light is insufficient, the photos taken by the camera do not allow the image processing to correctly identify the contour of the workpiece.

Figure 6. ANGLE AND ROBOTIC ARM CORRECTION CHART: (A) UPPER LEFT, (B) UPPER RIGHT, (C) BOTTOM LEFT, (D) BOTTOM RIGHT

Table V compares methods for grasping a single object. Although this study uses traditional methods, it is better suited to the actual factory environment. The image size in this study is 640x480, and the hardware is an older Intel(R) Core(TM)2 Duo CPU E7400 2.80 GHz with 2048 MB of memory. The actual factory environment requires a high grasp success rate and fast computation rather than new techniques.

V. CONCLUSION

This system combines the robot arm with a camera and uses the camera to find the minimum rectangular outline of the workpiece and the coordinates and angle of the picking point, giving the robot vision similar to a human's and saving manpower and equipment. Adding vision to the robot improves efficiency and accuracy, so the production process of the factory can be optimized and costs reduced. In addition, in the simulated factory environment, 12 of the 1200 tests failed to capture the contour of the workpiece, for an average success rate of 99%; the contour was missed only at night with only a few fluorescent lamps turned on. If the ambient light is insufficient, the camera cannot correctly recognize the position and contour of the workpiece, so the high success rate can be maintained in the factory simply by keeping the ambient light sufficient. This system still uses traditional image processing to find the workpiece; our method is simple but well suited to the factory environment. In the future, new techniques will be added: a single depth camera together with deep-learning object detection will automatically find workpieces of different shapes, after which image processing will detect the precise position and angle of the workpiece, so that the system can be used more widely.

TABLE IV. THE RESULTS OF 100 INDIVIDUAL TESTS WITH DIFFERENT NUMBERS OF LIGHTS TURNED ON AT DIFFERENT TIMES TO PICK THE WORKPIECE CONTOUR

Number of lamps | Day  | Noon | Night | Average
4               | 100% | 100% | 100%  | 100%
3               | 100% | 100% | 97%   | 99%
2               | 100% | 100% | 97%   | 99%
1               | 100% | 100% | 94%   | 98%
Overall average: 99%

TABLE V. GRASPING A SINGLE OBJECT

Method                | Grasp Success Rate (%) | Computation Time (sec)
Deep learning [20]    | 89.0                   | 13.500
DexNet 2.0 [21]       | 80.0                   | 0.800
Heng Guan et al. [16] | 85.6                   | 1.150
Our method            | 99.0                   | <0.001

ACKNOWLEDGMENTS

This study was supported by the campus program of Chaoyang University of Technology (program number: 9634).

REFERENCES

[1] Sam Francis, “Special report: robotics and automation in automotive manufacturing and supply chains,” Robotics & Automation, 2020/02/24. Available online: https://roboticsandautomationnews.com/2020/02/24/special-report-robotics-and-automation-in-automotive-manufacturing-and-supply-chains/30374/ (accessed on 2020/03/26).

[2] Nur Hanifa Mohd Zaidin, Muhammad Nurazri Md Diah and Po Hui Yee, Shahryar Sorooshian, “Quality Management in Industry 4.0 Era,” Journal of Management and Science 8(2), pp. 82-91, 2018.

[3] S. R. Balaji and S. Karthikeyan, “A survey on moving object tracking using image processing," International Conference on Intelligent Systems and Control (ISCO), Coimbatore, India, Jan 5-Jan 6, 2017.

[4] Shou-Cih Chen, “The Study of Target Detection Technology with Contours,” Master's thesis, School of Defense Science, National Defense University, Chung Cheng Institute of Technology, 2019.

[5] Jia-Wei Hsu, “Automatic Workpiece Alignment of CNC Machining by Gray Code Structured Light,” Master's thesis, Department of Mechanical and Computer-Aided Engineering, National Formosa University, 2018.

[6] Jun-Hao Wu, “An Application of Double CCD Stereoscopic Vision on the Task of Object Craning,” Master's thesis, Department of Automatic Control Engineering, Feng Chia University, 2017.

[7] Yi-Hao Hsieh, “Tri-Dexel-Based 3D Surface Smoothing and Feature Sharpness Enhancement Using Marching Cube and Workpiece Profile Measurement,” Master's thesis, Department of Computer Science and Information Engineering, National Cheng Kung University, 2016.

[8] Joseph Redmon, Santosh Divvala, Ross Girshick, and Ali Farhadi, “You Only Look Once: Unified, Real-Time Object Detection,” The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 779-788, Las Vegas, Nevada, Jun 26-Jul 1, 2016.

[9] Joseph Redmon and Ali Farhadi, “YOLO9000: Better, Faster, Stronger,” IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 7263-7271, Honolulu, Hawaii, Jul 22-Jul 25, 2017.

[10] Joseph Redmon and Ali Farhadi, “Yolov3: An incremental improvement," arXiv:1804.02767, 2018.

[11] Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, Cheng-Yang Fu and Alexander C. Berg, “SSD: Single Shot MultiBox Detector,” European Conference on Computer Vision (ECCV), pp. 21-37, Amsterdam, Netherlands, Oct 8-Oct 16, 2016.

[12] Ross Girshick, Jeff Donahue, Trevor Darrell and Jitendra Malik, “Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation,” The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 580-587, Columbus, Ohio, Jun 24-Jun 27, 2014.

[13] Ross Girshick, “Fast R-CNN,” The IEEE International Conference on Computer Vision (ICCV), pp. 1440-1448, Santiago, Chile, Dec 13-Dec 16, 2015.

[14] Shaoqing Ren, Kaiming He, Ross Girshick and Jian Sun, “Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks,” Neural Information Processing Systems(NIPS), Montreal, Canada, Dec 7-Dec 10, 2015.

[15] Sheng-Chieh Chuang, “Picking of Stacked Workpieces by Robot Arm Based on RGB-D Images,” Master's thesis, Department of Electrical Engineering, National Chung Cheng University, 2019.

[16] Heng Guan, Jiaxin Li, Rui Yan, “An Efficient Robotic Grasping Pipeline Base on Fully Convolutional Neural Network”, International Conference on Control, Automation and Robotics, Beijing, China, Apr 19-Apr 22, 2019.

[17] Ian Lenz, Honglak Lee, Ashutosh Saxena, “Deep Learning for Detecting Robotic Grasps”, International Conference on Learning Representations(ICLR) , Scottsdale, USA, May 2-May 4, 2013.

[18] Structural Analysis and Shape Descriptors, Available online: https://docs.opencv.org/ (accessed on 2020/02/20).

[19] Geometric Image Transformations, Available online: https://docs.opencv.org/2.4/modules/imgproc/doc/geometric_transformations.html (accessed on 2020/02/20).

[20] M. Gualtieri, A. ten Pas, K. Saenko, and R. Platt, “High precision grasp pose detection in dense clutter,” in Intelligent Robots and Systems (IROS), 2016 IEEE/RSJ International Conference on. IEEE, 2016,pp. 598–605.

[21] I. Lenz, H. Lee, and A. Saxena, “Deep learning for detecting robotic grasps,” The International Journal of Robotics Research, vol. 34, no.4-5, pp. 705–724, 2015.