Research Article

Ring Fusion of Fisheye Images Based on Corner Detection Algorithm for Around View Monitoring System of Intelligent Driving

Jianhui Zhao,1,2 Hongbo Gao,3 Xinyu Zhang,4 Yinglin Zhang,5 and Yuchao Liu1

1 Department of Computer Science and Technology, Tsinghua University, Beijing 100083, China
2 Department of Basic Courses, Army Military Transportation University, Tianjin 300161, China
3 State Key Laboratory of Automotive Safety and Energy, Tsinghua University, Beijing 100083, China
4 Information Technology Center, Tsinghua University, Beijing 100083, China
5 State Key Laboratory of Advanced Design and Manufacturing for Vehicle Body, Hunan University, Changsha 410000, China

Correspondence should be addressed to Hongbo Gao; [email protected]

Received 16 September 2017; Revised 16 December 2017; Accepted 3 January 2018; Published 1 February 2018

Academic Editor: Chenguang Yang

Journal of Robotics, Volume 2018, Article ID 9143290, 9 pages. https://doi.org/10.1155/2018/9143290

Copyright © 2018 Jianhui Zhao et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

In order to improve the visual effect of the around view monitor (AVM), we propose a novel ring fusion method to reduce the brightness difference among fisheye images and achieve a smooth transition around the stitching seam. Firstly, an integrated corner detection is proposed to automatically detect corner points for image registration. Then, we use equalization processing to reduce the brightness difference among images, and we match the color of the images according to the ring fusion method. Finally, we use distance weighting to blend the images around the stitching seam. Through this algorithm, we have made a Matlab toolbox for image blending. 100% of the required corner points are accurately and fully automatically detected, and the transition around the stitching seam is very smooth, with no obvious stitching trace.

1. Introduction

In the past decades, because of the rapid growth of road transportation and private cars, road traffic safety has become an important problem in society [1–3]. National traffic accident statistics show that accidents caused by drivers' vision limitation, delayed reaction, judgment error, and improper operation account for up to 40% of the total [3–5]. In order to solve this problem, advanced driving assistance systems (ADAS) have received more and more attention, such as the lane departure warning system (LDWS), forward collision warning system (FCWS), blind spot monitoring (BSD), and around view monitor (AVM) [6, 7]. Among them, the AVM provides the driver with 360-degree video image information around the vehicle body, in parking lots and in crowded city traffic, to reduce the driver's visual blind area, to help the driver better judge the road traffic conditions around the vehicle, to avoid collisions with surrounding pedestrians and vehicles, and to make the driving process safer and more convenient [8–10].

The key technologies of AVM are fisheye image correction and image fusion. In this paper we focus on image fusion, which includes image registration and blending. There are three main types of registration methods: region matching, transform domain based matching, and feature matching. Among them, feature based registration is fast and accurate, and some features are robust to image deformation, illumination change, and noise, so it is a common method of image registration. In the literature, the overlap points of two corrected images are extracted by matching SIFT features, which are invariant to scale and direction [11, 12]. The feature operators usually used to extract overlapping corner points include Harris [13, 14], Canny [15, 16], and Moravec [17, 18]. These feature operators are mainly used in the registration process of


general image mosaics. However, in the AVM system we match images by scene calibration. Documents [19, 20] detect the corner points of checkerboard patterns by quadrilateral join rules. In documents [21, 22], the initial corner set is obtained by an improved Hessian corner detector, and false points in the initial set are eliminated by the intensity and geometrical features of the chessboard pattern. But the above two methods are mainly designed for the chessboard pattern used in camera calibration, which does not meet the requirement of the calibration pattern used in scene calibration. Due to the influence of the inherent differences between cameras and their installation positions, there is a difference of exposure between the images, which leads to an obvious stitching trace. Therefore, image blending is necessary after registration. In the documents, the optimal seam is searched by minimizing the mean square variance of pixels in the region, and the adjacent interpolation method is used to smooth the stitching effect, but this method is not suitable for scenes with too large a difference of exposure [23, 24]. The documents fuse several images captured by the same camera with different parameters from the same angle of view by a weighting method, but this method can only adjust the brightness and cannot reduce the color difference [25, 26]. In the documents, seamless stitching is achieved by tone compensation near the optimal seam, but the stitching seam of an around view system is fixed, so the AVM image cannot be fully fused by this method [27, 28].

Therefore, in order to fully consider the needs of scene calibration, we propose an integrated corner detection method to automatically detect all corners in the process of image registration. In order to fully consider the influence of the inherent differences between cameras and their installation positions, we propose a ring fusion method to blend the images from 4 fisheye cameras. The main contributions of this paper lie in the following aspects: (1) limitation conditions on minimum area and shape are used to successfully remove redundant contours for corner detection; we also improve the corner position extraction accuracy by detecting corners in the fisheye images first and then calculating the corresponding positions in the corrected images; (2) the color matching method and ring shape scheme are used in image blending for a smoother transition, which makes it possible to seamlessly fuse images with large differences in exposure; (3) a Matlab toolbox for image blending in the AVM system is designed.

The rest of this paper is organized as follows. Section 2 introduces the AVM architecture. Section 3 describes the methodology of image registration and blending in detail. Section 4 describes the experimental results of our method. Conclusions are offered in Section 5.

2. AVM Architecture

The algorithm flow of the AVM is shown in Figure 1. Firstly, we input fisheye images of the calibration scene and detect corner points. Then the positions of these corner points in the corrected images are calculated by a correction model. Meanwhile, we use a Look-Up Table (LUT) to correct the fisheye images and obtain the corrected images. Secondly, the target positions of the corner points in the output image are calculated

Figure 1: Flow chart of image fusion (input scene image → extract corners → corner positions in corrected image → compute H → image registration → image blending, with the LUT-based correction model and measured size data as inputs).

by the size data in the calibration scene. Then the positions of the corner points in the corrected images and their target positions are used to compute the homography matrix H. Finally, we project the corrected images into the coordinate of the output image by using the homography matrix H. Then we use the ring fusion method to blend them, which is the emphasis of this paper.

In our experiment, a Volkswagen Magotan has been used. The length of the vehicle is 4.8 m and the width is 1.8 m. We use fisheye cameras with a 180-degree large view angle and a focal length of 2.32 mm. 4 fisheye cameras are mounted on the front, back, left, and right sides of the vehicle separately. The size of the image captured by each fisheye camera is 720 × 576. The size of the AVM output image is 1280 × 720. This paper develops the proposed method on a PC. The adopted simulation processor is the Intel(R) Core(TM) i7-6700HQ CPU at 2.60 GHz, and the simulation software is MATLAB.

3. Methodology

3.1. Scene Calibration. A calibration scene is set up for image registration in the next step. The distance between the vehicle body and the calibration pattern is 30 cm at the front and rear positions and 0 cm at the left and right. The reference point of the front pattern is A and that of the rear is F. We made point A collinear with the left vehicle body and F collinear with the right vehicle body. There are 12 point positions we need in every view angle, as shown in Figure 2.

The size data which need to be measured include the following:

(1) Car length: the line length of AE.
(2) Car width: the line length of AB.
(3) Offset: the line length of AC or the line length of BD.


Figure 2: The illustration of the calibration scene (reference points A–F, point 1, and the scene coordinate system XOY).

After the measurement of the size data, the target positions of the corner points in the coordinate of the output image are calculated from the following parameters: the size data measured above, the size of the output image defined by users, and the sizes of the calibration pattern and the vehicle. The calculation process is the same for all points; we take the target position of point 1 (as shown in Figure 2) as an example. Firstly, we calculate the position of point 1 in the calibration scene as shown in (1):

    x_1 = -(1/2) * W - w_w - w_b1,
    y_1 = -(1/2) * L - w_w - w_b1,                    (1)

where the origin of the calibration scene is located at the center of the calibration scene (as shown in Figure 2), (x_1, y_1) denotes the position of point 1, W denotes the vehicle width, L denotes the vehicle length, w_w is the white edge width, and w_b1 is the width of the big black box.

Secondly, we use the position in the calibration scene to calculate the position in the coordinate of the output image as shown in (2):

    scale = W_img / W_real,
    u_1 = scale * x_1 + W_img / 2,
    v_1 = scale * y_1 + L_img / 2,                    (2)

where scale denotes the scaling factor from the calibration scene to the coordinate of the output image, W_img denotes the width of the output image, W_real denotes the width of the calibration scene, (u_1, v_1) denotes the position in the coordinate of the output image, and (x_1, y_1) denotes the position in the calibration scene.
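The two-step calculation above can be sketched in a few lines of Python; the function name and the numeric scene parameters below are our own illustrative choices, not values from the paper:

```python
def target_position(W, L, w_w, w_b1, W_img, L_img, W_real):
    """Target position of point 1 in the output image, as in (1)-(2)."""
    # (1): position of point 1 in the calibration-scene coordinate,
    # whose origin is at the center of the scene.
    x1 = -0.5 * W - w_w - w_b1
    y1 = -0.5 * L - w_w - w_b1
    # (2): scale into the output-image coordinate.
    scale = W_img / W_real
    u1 = scale * x1 + W_img / 2
    v1 = scale * y1 + L_img / 2
    return u1, v1

# Illustrative numbers only: a 1.8 m x 4.8 m vehicle, a 0.1 m white edge,
# a 0.4 m black box, a 720 x 1280 (portrait) output image, and a 3.6 m
# wide calibration scene.
u1, v1 = target_position(1.8, 4.8, 0.1, 0.4, 720, 1280, 3.6)
# u1, v1 ≈ (80.0, 60.0) with these numbers
```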

3.2. Image Registration Based on Corner Detection

3.2.1. Detect and Calculate the Corner Point Positions. Firstly, the corners are detected automatically in the fisheye image by the integrated corner detection method. Secondly, the corresponding positions of these corners in the corrected image are calculated using the correction model. Finally, we save the positions in the corrected image for the next computation of the homography matrix.

Algorithm steps of the integrated corner detection method are as follows:

(1) Input fisheye images of the calibration scene from all 4 cameras.

(2) Use the Rufli corner detection method to detect thecorners in the chessboard array

(3) Based on the relative position between the black box and the chessboard array, use the detected corners from step (2) to obtain the Region Of Interest (ROI) of the big black box.

(4) Preprocess the ROI by adaptive binarization using the "adaptiveThreshold" function in OpenCV and a morphological closing operation to denoise.

(5) Obtain the contour of the big black box from the ROI and the positions of the contour vertices by the "findContours" function in OpenCV. Then we use the following method to remove redundant contours:

(1) Limit the minimum area of the contour according to the size ratio of the chessboard array to the big black box and their relative positions;


Figure 3: The illustration of image registration and coordinate unification (the four camera coordinates Oc1–Oc4 are unified into the output-image coordinate OO).

the threshold of minimum area is calculated as shown in (3):

    b_area_min = 1.5 * cb_area_avg   (front and rear view),
    b_area_min = 0.1 * cb_area_avg   (left and right view),     (3)

where b_area_min denotes the threshold of the big black box area and cb_area_avg denotes the average area of the small boxes in the chessboard array.

(2) Limit the contour shape: according to the location of the big black box and the imaging features of the fisheye camera, the big black box should be in a fixed shape. The shape restrictions are shown in (4):

    d_1 >= 0.15 * p  &&  d_2 >= 0.15 * p,
    8 * d_3 >= d_4   &&  8 * d_4 >= d_3,
    area_contour > 0.3 * area_rect,                             (4)

where d_1 and d_2 denote the diagonal lengths of the contour, d_3 and d_4 denote the lengths of adjacent sides of the contour, p denotes the perimeter of the contour, area_contour denotes the area of the contour, and area_rect denotes the area of the bounding rectangle of the contour.

(6) Use the SUSAN method to locate the exact positions of the contour vertices around the positions obtained from step (5).
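As a rough sketch of the contour filtering in step (5), the restrictions of (3) and (4) can be expressed as a single predicate over a candidate quadrilateral. Everything here (the function name, the vertex representation, the `front_rear` flag) is our own illustrative assumption:

```python
import math

def passes_contour_filter(quad, cb_area_avg, front_rear=True):
    """Apply the minimum-area and shape restrictions of (3)-(4) to a
    candidate big-black-box contour given as 4 (x, y) vertices in order."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    sides = [dist(quad[i], quad[(i + 1) % 4]) for i in range(4)]
    d1, d2 = dist(quad[0], quad[2]), dist(quad[1], quad[3])  # diagonals
    d3, d4 = sides[0], sides[1]                              # adjacent sides
    p = sum(sides)                                           # perimeter

    # Contour area via the shoelace formula.
    area = 0.5 * abs(sum(quad[i][0] * quad[(i + 1) % 4][1]
                         - quad[(i + 1) % 4][0] * quad[i][1]
                         for i in range(4)))
    # Area of the axis-aligned bounding rectangle.
    xs, ys = [v[0] for v in quad], [v[1] for v in quad]
    rect_area = (max(xs) - min(xs)) * (max(ys) - min(ys))

    # (3): minimum-area threshold relative to the average chessboard box.
    b_area_min = (1.5 if front_rear else 0.1) * cb_area_avg
    if area < b_area_min:
        return False
    # (4): diagonal, aspect-ratio, and rectangularity restrictions.
    return (d1 >= 0.15 * p and d2 >= 0.15 * p
            and 8 * d3 >= d4 and 8 * d4 >= d3
            and area > 0.3 * rect_area)

# A roughly square contour passes; a degenerate sliver does not.
square = [(0, 0), (10, 0), (10, 10), (0, 10)]
print(passes_contour_filter(square, cb_area_avg=20.0))  # True
```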

3.2.2. Image Registration and Coordinate Unification. After the scene calibration and corner detection, the corner positions in the coordinates of the corrected images and their target positions in the coordinate of the output image are obtained. Then

we need to unify the coordinates of the 4 corrected images into the coordinate of the output image, as shown in Figure 3. The specific process is as follows. Firstly, we calculate the homography transform matrix as shown in (5); the form of this matrix is shown in (6):

    P_b = H * P_u,                                    (5)

        [a_11  a_12  a_13]
    H = [a_21  a_22  a_23],                           (6)
        [a_31  a_32  a_33]

where P_u = [x_u, y_u, 1]^T denotes the corner position in the coordinate of the corrected image and P_b = [x_b, y_b, 1]^T denotes the target position in the coordinate of the output image.

Secondly, we project every pixel of the 4 corrected images into the coordinate of the output image as shown in (7):

    w   = a_31 * x_c + a_32 * y_c + a_33,
    x_o = (a_11 * x_c + a_12 * y_c + a_13) / w,
    y_o = (a_21 * x_c + a_22 * y_c + a_23) / w,       (7)

where (x_c, y_c) denotes the pixel position in the coordinate of the corrected image and (x_o, y_o) denotes the pixel position in the coordinate of the output image.
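A minimal sketch of the projection in (7), assuming the homography is given as a plain 3 × 3 nested list (the function name is ours):

```python
def project_pixel(H, xc, yc):
    """Project a corrected-image pixel (xc, yc) into the output image
    with a 3x3 homography H, as in (7)."""
    w = H[2][0] * xc + H[2][1] * yc + H[2][2]
    xo = (H[0][0] * xc + H[0][1] * yc + H[0][2]) / w
    yo = (H[1][0] * xc + H[1][1] * yc + H[1][2]) / w
    return xo, yo

# With the identity homography every pixel maps to itself.
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(project_pixel(I, 120, 45))  # (120.0, 45.0)
```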

3.3. Image Blending. As the corrected images from the 4 cameras are different from each other in brightness, saturation, and color, we blend them to improve the visual effect of the output image by using the ring fusion method.

The detailed process is as follows.

(1) Equalization Preprocessing. The "imadjust" function in Matlab is used for equalization preprocessing to reduce


Figure 4: The image blending result. (a) Original image; (b) after imadjust; (c) after ring color matching; (d) after interpolation.

the brightness difference among images. For example, the original image of the left view angle is shown in Figure 4(a), and the processing result is shown in Figure 4(b).

(2) Ring Color Matching

Step 1 (spatial transformation). As RGB space has a strong correlation between channels, it is not suitable for image color processing. So we transform RGB space to the lαβ space, where the correlation between the three channels is the smallest. The space conversion process includes three transformations, namely RGB → CIE XYZ → LMS → lαβ.

Firstly, from RGB space to CIE XYZ space one has

    [X]   [0.5141  0.3239  0.1604] [R]
    [Y] = [0.2651  0.6702  0.0641] [G]                (8)
    [Z]   [0.0241  0.1228  0.8444] [B]

Secondly, from CIE XYZ space to LMS space one has

    [L]   [ 0.3897  0.6890  -0.0787] [X]
    [M] = [-0.2298  1.1834   0.0464] [Y]              (9)
    [S]   [ 0       0        1     ] [Z]

Since the data are scattered in the LMS space, they are further converted to a logarithmic space with a base of 10, as shown in (10). This makes the data distribution not only more converged but also in line with the results of psychological and physical research on human color perception:

    L = log L,    M = log M,    S = log S.            (10)

Finally, from LMS space to lαβ space one has (11). This transformation is based on principal component analysis (PCA) of the data, where l is the first principal component, α is the second principal component, and β is the third principal component:

    [l]   [1/sqrt(3)      0          0    ] [1   1   1] [L]
    [α] = [   0       1/sqrt(6)      0    ] [1   1  -2] [M]    (11)
    [β]   [   0           0      1/sqrt(2)] [1  -1   0] [S]

After the above three steps, the conversion from RGB to lαβ space is completed.
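The three transformations of Step 1 can be sketched per pixel as follows; for compactness we use the combined RGB → LMS matrix (the product of the matrices in (8) and (9), as given by Reinhard et al.), and the guard value for the logarithm is our own assumption:

```python
import math

# Combined RGB -> LMS matrix (product of (8) and (9)).
RGB2LMS = [[0.3811, 0.5783, 0.0402],
           [0.1967, 0.7244, 0.0782],
           [0.0241, 0.1288, 0.8444]]

def rgb_to_lab_reinhard(r, g, b):
    """Convert one RGB pixel to (l, alpha, beta) via LMS and log10."""
    L, M, S = (row[0] * r + row[1] * g + row[2] * b for row in RGB2LMS)
    # (10): move to log space; guard against log of zero (our assumption).
    L, M, S = (math.log10(max(v, 1e-6)) for v in (L, M, S))
    # (11): decorrelating rotation from PCA.
    l = (L + M + S) / math.sqrt(3)
    alpha = (L + M - 2 * S) / math.sqrt(6)
    beta = (L - M) / math.sqrt(2)
    return l, alpha, beta

# For a neutral white pixel all three lαβ channels are close to zero.
print(rgb_to_lab_reinhard(1.0, 1.0, 1.0))
```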

Step 2 (color registration). Firstly, the mean and standard deviation of every channel in lαβ space are calculated according to (12):

    μ = (1/N) * Σ_{i=1}^{N} v_i,
    σ = sqrt( (1/N) * Σ_{i=1}^{N} (v_i - μ)^2 ),      (12)

where μ denotes the mean value, N denotes the total number of pixels, v_i denotes the value of pixel i, and σ denotes the standard deviation.

Secondly, the color matching factors are calculated according to (13):

    f_l = σ_l^v1 / σ_l^v2,
    f_α = σ_α^v1 / σ_α^v2,
    f_β = σ_β^v1 / σ_β^v2,                            (13)

where f_l denotes the factor that matches the color of image V2 to image V1 in channel l, σ_l^v1 denotes the standard deviation of image V1 in channel l, and σ_l^v2 denotes the standard deviation of image V2 in channel l. The rest are similar.

Finally, we match the color of the images as shown in (14):

    l'_v2 = f_l * (l_v2 - mean(l_v2)) + mean(l_v1),
    α'_v2 = f_α * (α_v2 - mean(α_v2)) + mean(α_v1),
    β'_v2 = f_β * (β_v2 - mean(β_v2)) + mean(β_v1),   (14)

where l'_v2 denotes the pixel value of image V2 after color matching in channel l, f_l denotes the factor of color matching in channel l, l_v2 denotes the pixel value of image V2 in channel l, mean(l_v1) denotes the average pixel value of image V1 in channel l, and mean(l_v2) denotes the average pixel value of image V2 in channel l. The rest are similar.
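Equations (12)–(14) can be sketched per channel as follows; the function names are ours, and the example treats a channel simply as a flat list of values:

```python
import math

def channel_stats(vals):
    """Mean and standard deviation of one lαβ channel, as in (12)."""
    n = len(vals)
    mu = sum(vals) / n
    sigma = math.sqrt(sum((v - mu) ** 2 for v in vals) / n)
    return mu, sigma

def match_channel(src, ref):
    """Match one channel of image src (V2) to image ref (V1), as in (13)-(14)."""
    mu_ref, sigma_ref = channel_stats(ref)
    mu_src, sigma_src = channel_stats(src)
    f = sigma_ref / sigma_src  # matching factor of (13)
    return [f * (v - mu_src) + mu_ref for v in src]

# After matching, src carries the reference channel's mean and deviation;
# here src is an affine image of ref, so the output coincides with ref.
out = match_channel([0.0, 1.0, 2.0, 3.0], [10.0, 10.5, 11.0, 11.5])
```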

Step 3 (global optimization). We match the color of the images from the 4 cameras anticlockwise, as follows, to reach a globally optimized result. Firstly, we match the colors of V4 to V3, then V3 to V2, then V2 to V1, and finally V4 to V1, which forms a ring shape, as shown in Figure 5. The processing result of the left view is shown in Figure 4(c).

(3) Weighted Blending. After color matching, the visual effect of the output image has been greatly improved, but around the stitching seams between different corrected images it is still not good enough. Therefore, we use (15) to ensure a smooth transition. The interpolation result of the left view angle image is shown in Figure 4(d):

    O(i, j) = V1(i, j) * (d / d_max) + V2(i, j) * (1 - d / d_max),   0 < d < d_max,    (15)

where O(i, j) denotes the pixel value in the output image and (i, j) is the position index of the pixel; V1(i, j) and V2(i, j) denote the corresponding pixel values in corrected images V1 and V2; d denotes the distance from the pixel to the seam; and d_max denotes the width of the transition field, as shown in Figure 5.
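A minimal per-pixel sketch of (15); the clamping outside the transition band is our own assumption, reflecting that only one image contributes once d leaves the interval (0, d_max):

```python
def blend_pixel(v1, v2, d, d_max):
    """Distance-weighted blend of two overlapping pixels, as in (15).
    d is the pixel's distance to the stitching seam and d_max is the
    width of the transition field."""
    t = min(max(d / d_max, 0.0), 1.0)  # clamp outside the transition band
    return v1 * t + v2 * (1.0 - t)

# On the seam (d = 0) only V2 contributes; at the far edge only V1.
print(blend_pixel(200.0, 100.0, 0.0, 40.0))   # 100.0
print(blend_pixel(200.0, 100.0, 40.0, 40.0))  # 200.0
```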

4. Experiment Result

Some details of the experiment have been provided in Section 2 of this paper, so in this part we only introduce the results. The fisheye images captured from the 4 cameras are shown in Figure 6, and their corresponding corrected images are shown in Figure 7. The corner detection and calculation results are shown in Figure 8, where Figure 8(a) shows

Figure 5: The illustration of the ring color matching method (views V1–V4 and the transition width d).

Table 1: Comparison of different corner detection algorithms.

Method                             | Cc/N | Cb/N | (Cc + Cb)/N
Rufli [19]                         | 75   | 0    | 75
Harris [13]                        | 693  | 165  | 859
Tian [7]                           | 1683 | 263  | 1946
Integrated corner detection method | 75   | 25   | 100

the corner positions detected in the distorted image and Figure 8(b) shows the corresponding positions calculated in the corrected image. The integrated corner detection algorithm is compared with several other corner detection algorithms in Table 1.

In Table 1, Cc denotes the number of corner points detected correctly in the chessboard, Cb denotes the number of corner points detected correctly in the big black box, and N denotes the total number of corner points detected in the calibration scene. The Rufli method cannot detect the vertices of the big black box. The Harris and Shi-Tomasi methods cannot detect all target vertices and generate a lot of redundant corners. The integrated corner detection algorithm can accurately extract all the target corner points of the calibration patterns in the scene. As a result, the integrated corner detection algorithm proposed by us is effective.

The output image result is shown in Figure 9: Figure 9(a) is the result before image blending and Figure 9(b) is the result after image blending. The experimental results show that the proposed algorithm achieves a smooth visual effect around the stitching seam, which proves that our ring fusion method is effective.

5. Conclusion

This paper has proposed a ring fusion method to obtain a better visual effect of the AVM system for intelligent driving. To achieve this, an integrated corner detection method for image registration and a ring shape scheme for image blending have been presented. Experimental results prove that the designed approach is satisfactory: 100% of the


(a) Front view (b) Right view

(c) Back view (d) Left view

Figure 6: Fisheye images from each camera.

(a) Front view (b) Right view

(c) Back view (d) Left view

Figure 7: Corresponding corrected images.


(a) Corner positions in distorted image (b) Corresponding positions in undistorted image

Figure 8: The corner detection and calculation result.

(a) Before fusion (b) After fusion

Figure 9: Stitched bird view image of AVM.

required corner points are accurately and fully automatically detected, and the transition around the fusion seam is smooth, with no obvious stitching trace. However, the images we processed in this experiment are static, so in future work we will transplant this algorithm to a development board for dynamic real-time testing and try to apply the ring fusion method to more occasions.

Conflicts of Interest

The authors declare no conflicts of interest.

Acknowledgments

This work was supported by the National High Technology Research and Development Program ("973" Program) of China under Grant no. 2016YFB0100903, Beijing Municipal Science and Technology Commission special major projects under Grant nos. D171100005017002 and D171100005117002, the National Natural Science Foundation of China under Grant no. U1664263, Junior Fellowships for Advanced Innovation Think-Tank Program of China Association for Science and Technology under Grant no. DXB-ZKQN-2017-035, and a project funded by the China Postdoctoral Science Foundation under Grant no. 2017M620765.

References

[1] C. Guo, J. Meguro, Y. Kojima, and T. Naito, "A Multimodal ADAS System for Unmarked Urban Scenarios Based on Road Context Understanding," IEEE Transactions on Intelligent Transportation Systems, vol. 16, no. 4, pp. 1690–1704, 2015.

[2] A. Pandey and U. C. Pati, "Development of saliency-based seamless image compositing using hybrid blending (SSICHB)," IET Image Processing, vol. 11, no. 6, pp. 433–442, 2017.

[3] Ministry of Public Security Traffic Administration, People's Republic of China, Road Traffic Accident Statistic Annual Report, Wuxi, Jiangsu Province: Ministry of Public Security Traffic Management Science Research Institute, 2011.

[4] S. Lee, S. J. Lee, J. Park, and H. J. Kim, "Exposure correction and image blending for planar panorama stitching," in Proceedings of the 16th International Conference on Control, Automation and Systems, ICCAS 2016, pp. 128–131, Korea, October 2016.

[5] H. Ma, M. Wang, M. Fu, and C. Yang, "A New Discrete-time Guidance Law Base on Trajectory Learning and Prediction," in Proceedings of the AIAA Guidance, Navigation, and Control Conference, Minneapolis, Minnesota.

[6] C.-L. Su, C.-J. Lee, M.-S. Li, and K.-P. Chen, "3D AVM system for automotive applications," in Proceedings of the 10th International Conference on Information, Communications and Signal Processing, ICICS 2015, Singapore, December 2015.

[7] F. Tian and P. Shi, "Image Mosaic using ORB descriptor and improved blending algorithm," in Proceedings of the 2014 7th International Congress on Image and Signal Processing, CISP 2014, pp. 693–698, China, October 2014.

[8] S. M. Santhanam, V. Balisavira, S. H. Roh, and V. K. Pandey, "Lens distortion correction and geometrical alignment for Around View Monitoring system," in Proceedings of the 18th IEEE International Symposium on Consumer Electronics, ISCE 2014, Republic of Korea, June 2014.

[9] D. Suru and S. Karamchandani, "Image fusion in variable raster media for enhancement of graphic device interface," in Proceedings of the 1st International Conference on Computing, Communication, Control and Automation, ICCUBEA 2015, pp. 733–736, India, February 2015.

[10] C. Yang, H. Ma, B. Xu, and M. Fu, "Adaptive control with nearest-neighbor previous instant compensation for discrete-time nonlinear strict-feedback systems," in Proceedings of the 2012 American Control Conference, ACC 2012, pp. 1913–1918, Canada, June 2012.

[11] Z. Jiang, J. Wu, D. Cui, et al., "Stitching Method for Distorted Image Based on SIFT Feature Matching," in Proceedings of the International Conference on Computing and Networking Technology, pp. 107–110, 2013.

[12] E. M. Upadhyay and N. K. Rana, "Exposure fusion for concealed weapon detection," in Proceedings of the 2014 2nd International Conference on Devices, Circuits and Systems, ICDCS 2014, India, March 2014.

[13] I. Sipiran and B. Bustos, "Harris 3D: a robust extension of the Harris operator for interest point detection on 3D meshes," The Visual Computer, vol. 27, no. 11, pp. 963–976, 2011.

[14] Y. Zhao and D. Xu, "Fast image blending using seeded region growing," Communications in Computer and Information Science, vol. 525, pp. 408–415, 2015.

[15] Y.-K. Huo, G. Wei, Y.-D. Zhang, and L.-N. Wu, "An adaptive threshold for the Canny operator of edge detection," in Proceedings of the 2nd International Conference on Image Analysis and Signal Processing, IASP 2010, pp. 371–374, China, April 2010.

[16] G. Peljor and T. Kondo, "A saturation-based image fusion method for static scenes," in Proceedings of the 6th International Conference on Information and Communication Technology for Embedded Systems, IC-ICTES 2015, Thailand, March 2015.

[17] L. Jiang, J. Liu, D. Li, and Z. Zhu, "3D point sets matching method based on moravec vertical interest operator," Advances in Intelligent and Soft Computing, vol. 144, no. 1, pp. 53–59, 2012.

[18] J. Lang, "Color image encryption based on color blend and chaos permutation in the reality-preserving multiple-parameter fractional Fourier transform domain," Optics Communications, vol. 338, pp. 181–192, 2015.

[19] M. Rufli, D. Scaramuzza, and R. Siegwart, "Automatic detection of checkerboards on blurred and distorted images," in Proceedings of the 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS, pp. 3121–3126, France, September 2008.

[20] J.-E. Scholtz, K. Husers, M. Kaup, et al., "Non-linear image blending improves visualization of head and neck primary squamous cell carcinoma compared to linear blending in dual-energy CT," Clinical Radiology, vol. 70, no. 2, pp. 168–175, 2015.

[21] Y. Liu, S. Liu, Y. Cao, and Z. Wang, "Automatic chessboard corner detection method," IET Image Processing, vol. 10, no. 1, pp. 16–23, 2016.

[22] Y. Zhang, S. Deng, Z. Liu, and Y. Wang, "Aesthetic QR Codes Based on Two-Stage Image Blending," in MultiMedia Modeling, vol. 8936 of Lecture Notes in Computer Science, pp. 183–194, Springer International Publishing, Cham, 2015.

[23] K. Pulli, M. Tico, and Y. Xiong, "Mobile panoramic imaging system," in Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Workshops, CVPRW 2010, pp. 108–115, USA, June 2010.

[24] X. Zhang, H. Gao, M. Guo, G. Li, Y. Liu, and D. Li, "A study on key technologies of unmanned driving," CAAI Transactions on Intelligence Technology, vol. 1, no. 1, pp. 4–13, 2016.

[25] Y. Tang and J. Shin, "Image Stitching with Efficient Brightness Fusion and Automatic Content Awareness," in Proceedings of the International Conference on Signal Processing and Multimedia Applications, pp. 60–66, Vienna, Austria, August 2014.

[26] H. B. Gao, X. Y. Zhang, T. L. Zhang, Y. C. Liu, and D. Y. Li, "Research of intelligent vehicle variable granularity evaluation based on cloud model," Acta Electronica Sinica, vol. 44, no. 2, pp. 365–374, 2016.

[27] J.-H. Cha, Y.-S. Jeon, Y.-S. Moon, and S.-H. Lee, "Seamless and fast panoramic image stitching," in Proceedings of the 2012 IEEE International Conference on Consumer Electronics, ICCE 2012, pp. 29–30, USA, January 2012.

[28] J. Liu, H. Ma, X. Ren, and M. Fu, "Optimal formation of robots by convex hull and particle swarm optimization," in Proceedings of the 2013 3rd IEEE Symposium on Computational Intelligence in Control and Automation, CICA 2013, pp. 104–111, Singapore, April 2013.


2 Journal of Robotics

general image mosaics. However, in an AVM system, images are matched by scene calibration. Documents [19, 20] detect the corner points of checkerboard patterns by quadrilateral join rules. In documents [21, 22], the initial corner set is obtained by an improved Hessian corner detector, and false points in the initial set are eliminated using the intensity and geometric features of the chessboard pattern. But these two methods are mainly designed for the chessboard pattern used in camera calibration, which does not meet the requirements of the calibration pattern used in scene calibration. Due to the inherent differences among cameras and their installation positions, there are exposure differences between the images, which lead to obvious stitching traces. Therefore, image blending is necessary after registration. In documents [23, 24], the optimal seam is searched by minimizing the mean square variance of pixels in the region, and adjacent interpolation is used to smooth the stitching result, but this method is not suitable for scenes with very large exposure differences. Documents [25, 26] fuse several images captured by the same camera with different parameters from the same view angle by a weighting method, but this method can only adjust the brightness and cannot reduce the color difference. In documents [27, 28], seamless stitching is achieved by tone compensation near the optimal seam, but the stitching seam of an around view system is fixed, so the AVM image cannot be fully fused by this method.

Therefore, in order to fully meet the needs of scene calibration, we propose an integrated corner detection method to automatically detect all corners in the process of image registration. To account for the inherent differences among cameras and their installation positions, we propose a ring fusion method to blend the images from the 4 fisheye cameras. The main contributions of this paper are as follows. (1) Limitation conditions on minimum area and shape are used to remove redundant contours for corner detection. We also improve the accuracy of corner position extraction by detecting corners in the fisheye images first and then calculating the corresponding positions in the corrected images. (2) A color matching method and a ring-shaped scheme are used in image blending for a smoother transition, which makes it possible to seamlessly fuse images with large differences in exposure. (3) A Matlab toolbox for image blending in the AVM system is designed.

The rest of this paper is organized as follows. Section 2 introduces the AVM architecture. Section 3 describes the methodology of image registration and blending in detail. Section 4 presents the experimental results of our method. Conclusions are offered in Section 5.

2. AVM Architecture

The algorithm flow of the AVM is shown in Figure 1. Firstly, we input fisheye images of the calibration scene and detect corner points. Then the positions of these corner points in the corrected images are calculated by a correction model. Meanwhile, we use a Look-Up Table (LUT) to correct the fisheye images and obtain the corrected images. Secondly, the target positions of the corner points in the output image are calculated

Figure 1: Flow chart of image fusion.

by the size data of the calibration scene. Then the positions of the corner points in the corrected images and their target positions are used to compute the homography matrix H. Finally, we project the corrected images into the coordinate system of the output image using the homography matrix H. Then we use the ring fusion method to blend them, which is the emphasis of this paper.

In our experiment, a Volkswagen Magotan has been used. The length of the vehicle is 4.8 m and the width is 1.8 m. We use fisheye cameras with a 180-degree wide view angle and a focal length of 2.32 mm. 4 fisheye cameras are mounted on the front, back, left, and right sides of the vehicle separately. The size of the image captured by each fisheye camera is 720×576. The size of the AVM output image is 1280×720. This paper develops the proposed method on a PC. The adopted simulation processor is the Intel(R) Core(TM) i7-6700HQ CPU at 2.60 GHz, and the simulation software is MATLAB.

3. Methodology

3.1. Scene Calibration. The calibration scene is set up for image registration in the next step. The distance between the vehicle body and the calibration pattern at the front and rear positions is 30 cm, and at the right and left it is 0 cm. The reference point of the front pattern is A, and that of the rear is F. We make point A collinear with the left vehicle body and F collinear with the right vehicle body. There are 12 point positions we need in every view angle, as shown in Figure 2.

The size data which need to be measured include the following:

(1) Car length: the length of line AE.
(2) Car width: the length of line AB.
(3) Offset: the length of line AC or the length of line BD.


Figure 2: The illustration of the calibration scene.

After the measurement of the size data, the target positions of the corner points in the coordinate system of the output image are calculated from the following parameters: the size data measured above, the size of the output image defined by users, and the sizes of the calibration pattern and vehicle. The calculation process is the same for the target position of every point. We take the target position of point 1 (as shown in Figure 2) as an example. Firstly, we calculate the position of point 1 in the calibration scene, as shown in (1):

$$x_1 = -\tfrac{1}{2}W - w_w - w_{b1},\qquad y_1 = -\tfrac{1}{2}L - w_w - w_{b1} \tag{1}$$

where the origin of the calibration scene is located at its center, as shown in Figure 2; $(x_1, y_1)$ denotes the position of point 1, $W$ denotes the vehicle width, $L$ denotes the vehicle length, $w_w$ is the white edge width, and $w_{b1}$ is the width of the big black box.

Secondly, we use the position in the calibration scene to calculate the position in the coordinate system of the output image, as shown in (2):

$$\text{scale} = \frac{W_{\text{img}}}{W_{\text{real}}},\qquad u_1 = \text{scale}\cdot x_1 + \frac{W_{\text{img}}}{2},\qquad v_1 = \text{scale}\cdot y_1 + \frac{L_{\text{img}}}{2} \tag{2}$$

where scale denotes the scaling factor from the calibration scene to the coordinate system of the output image, $W_{\text{img}}$ denotes the width of the output image, $W_{\text{real}}$ denotes the width of the calibration scene, $(u_1, v_1)$ denotes the position in the coordinate system of the output image, and $(x_1, y_1)$ denotes the position in the calibration scene.
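As an illustration, the mapping of (1) and (2) from scene coordinates to output-image pixels can be sketched in a few lines of Python (all numeric values below are hypothetical examples, not the measured dimensions of the actual calibration scene):

```python
# Sketch of eqs. (1)-(2): scene position of point 1 and its target pixel
# in the output image. The scene origin is at the scene center.

def scene_position(W, L, w_w, w_b1):
    """Eq. (1): position of point 1 in the calibration-scene frame."""
    x1 = -0.5 * W - w_w - w_b1
    y1 = -0.5 * L - w_w - w_b1
    return x1, y1

def target_pixel(x1, y1, W_img, L_img, W_real):
    """Eq. (2): scale scene coordinates into the output-image frame."""
    scale = W_img / W_real
    u1 = scale * x1 + W_img / 2
    v1 = scale * y1 + L_img / 2
    return u1, v1

# Example with hypothetical sizes (meters for the scene, pixels for the image):
x1, y1 = scene_position(W=1.8, L=4.8, w_w=0.2, w_b1=0.4)
u1, v1 = target_pixel(x1, y1, W_img=720, L_img=1280, W_real=6.0)
```

The same two steps are repeated for all 12 points of each view angle.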

3.2. Image Registration Based on Corner Detection

3.2.1. Detect and Calculate the Corner Point Positions. Firstly, the corners are detected automatically in the fisheye image by the integrated corner detection method. Secondly, the corresponding positions of these corners in the corrected image are calculated using the correction model. Finally, we save the positions in the corrected image for the subsequent computation of the homography matrix.

The steps of the integrated corner detection method are as follows:

(1) Input fisheye images of the calibration scene from all 4 cameras.

(2) Use the Rufli corner detection method to detect the corners of the chessboard array.

(3) Based on the relative position between the big black box and the chessboard array, use the detected corners from step (2) to obtain the Region Of Interest (ROI) of the big black box.

(4) Preprocess the ROI by adaptive binarization using the "adaptiveThreshold" function in OpenCV and a morphological closing operation to denoise.

(5) Obtain the contour of the big black box from the ROI and the positions of the contour vertices by the "findContours" function in OpenCV. Then we use the following method to remove redundant contours:

(1) Limit the minimum area of the contour according to the size ratio of the chessboard array to the big black box and their relative positions;


Figure 3: The illustration of image registration and coordinate unification.

the threshold of the minimum area is calculated as shown in (3):

$$b\_area_{\min} = \begin{cases} 1.5 \cdot cb\_area_{\text{avg}} & \text{(front and rear view)} \\ 0.1 \cdot cb\_area_{\text{avg}} & \text{(left and right view)} \end{cases} \tag{3}$$

where $b\_area_{\min}$ denotes the threshold of the big black box area and $cb\_area_{\text{avg}}$ denotes the average area of the small boxes in the chessboard array.

(2) Limit the contour shape: according to the location of the big black box and the imaging features of the fisheye camera, the big black box should appear with a roughly fixed shape. The shape restrictions are shown in (4):

$$d_1 \ge 0.15\,p \;\&\&\; d_2 \ge 0.15\,p,\qquad 8 d_3 \ge d_4 \;\&\&\; 8 d_4 \ge d_3,\qquad area_{\text{contour}} > 0.3\, area_{\text{rect}} \tag{4}$$

where $d_1$ and $d_2$ denote the diagonal lengths of the contour, $d_3$ and $d_4$ denote the lengths of adjacent sides of the contour, $p$ denotes the perimeter of the contour, $area_{\text{contour}}$ denotes the area of the contour, and $area_{\text{rect}}$ denotes the area of the enclosing rectangle of the contour.

(6) Use the SUSAN method to locate the exact positions of the contour vertices around the positions obtained in step (5).
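The redundant-contour filter of step (5) can be sketched as the following predicate (pure Python; in practice the diagonals, side lengths, perimeter, and areas would be measured with OpenCV, and all numeric inputs below are hypothetical examples):

```python
# Sketch of the contour filter of eqs. (3)-(4).
# d1, d2: contour diagonals; d3, d4: adjacent side lengths; p: perimeter;
# area_contour / area_rect: contour area and enclosing-rectangle area.

def min_area_threshold(cb_area_avg, view):
    """Eq. (3): minimum big-black-box area vs. the average small-box area."""
    factor = 1.5 if view in ("front", "rear") else 0.1
    return factor * cb_area_avg

def keep_contour(d1, d2, d3, d4, p, area_contour, area_rect,
                 cb_area_avg, view):
    """True if the contour may be the big black box (eqs. (3)-(4))."""
    if area_contour < min_area_threshold(cb_area_avg, view):
        return False                                  # eq. (3): too small
    if not (d1 >= 0.15 * p and d2 >= 0.15 * p):
        return False                                  # degenerate diagonals
    if not (8 * d3 >= d4 and 8 * d4 >= d3):
        return False                                  # aspect ratio unbounded
    return area_contour > 0.3 * area_rect             # fills enough of its box
```

Contours failing any of the three conditions are discarded before the SUSAN refinement of step (6).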

3.2.2. Image Registration and Coordinate Unification. After the scene calibration of Figure 3 and corner detection, the corner positions in the coordinates of the corrected images and their target positions in the coordinates of the output image are obtained. Then we need to unify the coordinates of the 4 corrected images into the coordinate system of the output image, as shown in Figure 3. The specific process is as follows. Firstly, we calculate the homography transform matrix, as shown in (5); the form of this matrix is shown in (6):

$$P_b = H \cdot P_u \tag{5}$$

$$H = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} \tag{6}$$

where $P_u = [x_u\ y_u\ 1]^T$ denotes the corner position in the coordinate system of the corrected image and $P_b = [x_b\ y_b\ 1]^T$ denotes the target position in the coordinate system of the output image.

Secondly, we project every pixel of the 4 corrected images into the coordinate system of the output image, as shown in (7):

$$w = a_{31}x_c + a_{32}y_c + a_{33},\qquad x_o = \frac{a_{11}x_c + a_{12}y_c + a_{13}}{w},\qquad y_o = \frac{a_{21}x_c + a_{22}y_c + a_{23}}{w} \tag{7}$$

where $(x_c, y_c)$ denotes the pixel position in the coordinate system of the corrected image and $(x_o, y_o)$ denotes the pixel position in the coordinate system of the output image.
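The projection of (7) is the standard perspective division of a 3×3 homography; a minimal sketch (pure Python, with a hypothetical example matrix) could be:

```python
# Sketch of eq. (7): project a corrected-image pixel into the output frame.
def project(H, xc, yc):
    """Apply homography H (3x3 nested list) to pixel (xc, yc)."""
    w = H[2][0] * xc + H[2][1] * yc + H[2][2]
    xo = (H[0][0] * xc + H[0][1] * yc + H[0][2]) / w
    yo = (H[1][0] * xc + H[1][1] * yc + H[1][2]) / w
    return xo, yo

# Example: a pure translation by (10, 20) expressed as a homography.
H = [[1, 0, 10],
     [0, 1, 20],
     [0, 0,  1]]
```

In practice $H$ is estimated from the corner correspondences of Section 3.2.1 (at least four point pairs are needed, e.g. via OpenCV's findHomography).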

3.3. Image Blending. As the corrected images from the 4 cameras differ from each other in brightness, saturation, and color, we blend them to improve the visual effect of the output image by using the ring fusion method.

The detailed process is as follows.

(1) Equalization Preprocessing. The "imadjust" function in Matlab is used for equalization preprocessing to reduce


Figure 4: The image blending result. (a) Original image; (b) after imadjust; (c) after ring color matching; (d) after interpolation.

the brightness difference among the images. For example, the original image of the left view angle is shown in Figure 4(a), and the processing result is shown in Figure 4(b).

(2) Ring Color Matching

Step 1 (spatial transformation). As RGB space has strong inter-channel correlation, it is not suitable for image color processing. So we transform RGB space to the lαβ space, where the correlation between the three channels is the smallest. The space conversion consists of three transformations, namely RGB → CIE XYZ → LMS → lαβ.

Firstly, from RGB space to CIE XYZ space, one has

$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} 0.5141 & 0.3239 & 0.1604 \\ 0.2651 & 0.6702 & 0.0641 \\ 0.0241 & 0.1228 & 0.8444 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix} \tag{8}$$

Secondly, from CIE XYZ space to LMS space, one has

$$\begin{bmatrix} L \\ M \\ S \end{bmatrix} = \begin{bmatrix} 0.3897 & 0.6890 & -0.0787 \\ -0.2298 & 1.1834 & 0.0464 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} \tag{9}$$

Since the data are scattered in the LMS space, they are further converted to a logarithmic space with base 10, as shown in (10). This makes the data distribution not only more convergent but also in line with the results of psychophysical research on human color perception:

$$L = \log L,\qquad M = \log M,\qquad S = \log S \tag{10}$$

Finally, from LMS space to lαβ space, one has (11). This transformation is based on principal component analysis (PCA) of the data, where l is the first principal component, α is the second, and β is the third:

$$\begin{bmatrix} l \\ \alpha \\ \beta \end{bmatrix} = \begin{bmatrix} \frac{1}{\sqrt{3}} & 0 & 0 \\ 0 & \frac{1}{\sqrt{6}} & 0 \\ 0 & 0 & \frac{1}{\sqrt{2}} \end{bmatrix} \begin{bmatrix} 1 & 1 & 1 \\ 1 & 1 & -2 \\ 1 & -1 & 0 \end{bmatrix} \begin{bmatrix} L \\ M \\ S \end{bmatrix} \tag{11}$$

After the above three steps, the conversion from RGB to lαβ space is completed.
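The chain of (8)-(11) can be sketched per pixel as follows (pure Python for illustration, assuming channel values in (0, 1] so that the logarithm is defined):

```python
import math

# Sketch of eqs. (8)-(11): convert one RGB pixel to the decorrelated
# l-alpha-beta space used for color matching.
RGB2XYZ = [[0.5141, 0.3239, 0.1604],
           [0.2651, 0.6702, 0.0641],
           [0.0241, 0.1228, 0.8444]]
XYZ2LMS = [[ 0.3897, 0.6890, -0.0787],
           [-0.2298, 1.1834,  0.0464],
           [ 0.0,    0.0,     1.0   ]]

def matvec(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def rgb_to_lab(rgb):
    lms = matvec(XYZ2LMS, matvec(RGB2XYZ, list(rgb)))
    L, M, S = (math.log10(c) for c in lms)     # eq. (10): base-10 log
    l = (L + M + S) / math.sqrt(3)             # eq. (11), expanded rows
    a = (L + M - 2 * S) / math.sqrt(6)
    b = (L - M) / math.sqrt(2)
    return l, a, b
```

For a near-gray pixel the chroma channels α and β come out close to zero, since the three log-LMS values nearly coincide; this decorrelation is what allows the channels to be matched independently in Step 2.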

Step 2 (color registration). Firstly, the mean and standard deviation of every channel in lαβ space are calculated according to (12):

$$\mu = \frac{1}{N}\sum_{i=1}^{N} v_i,\qquad \sigma = \sqrt{\frac{1}{N}\sum_{i=1}^{N} (v_i - \mu)^2} \tag{12}$$

where $\mu$ denotes the mean value, $N$ denotes the total number of pixels, $v_i$ denotes the value of pixel $i$, and $\sigma$ denotes the standard deviation.

Secondly, the color matching factors are calculated according to (13):

$$f_l = \frac{\sigma_l^{v1}}{\sigma_l^{v2}},\qquad f_\alpha = \frac{\sigma_\alpha^{v1}}{\sigma_\alpha^{v2}},\qquad f_\beta = \frac{\sigma_\beta^{v1}}{\sigma_\beta^{v2}} \tag{13}$$

where $f_l$ denotes the factor that matches the color of image $v2$ to image $v1$ in channel $l$, $\sigma_l^{v1}$ denotes the standard deviation of image $v1$ in channel $l$, and $\sigma_l^{v2}$ denotes that of image $v2$ in channel $l$. The other factors are defined similarly.

Finally, we match the color of the images as shown in (14):

$$l'_{v2} = f_l (l_{v2} - \bar{l}_{v2}) + \bar{l}_{v1},\qquad \alpha'_{v2} = f_\alpha (\alpha_{v2} - \bar{\alpha}_{v2}) + \bar{\alpha}_{v1},\qquad \beta'_{v2} = f_\beta (\beta_{v2} - \bar{\beta}_{v2}) + \bar{\beta}_{v1} \tag{14}$$

where $l'_{v2}$ denotes the pixel value of image $v2$ after color matching in channel $l$, $f_l$ denotes the color matching factor in channel $l$, $l_{v2}$ denotes the pixel value of image $v2$ in channel $l$, $\bar{l}_{v2}$ denotes the average pixel value of image $v2$ in channel $l$, and $\bar{l}_{v1}$ denotes the average pixel value of image $v1$ in channel $l$. The other channels are handled similarly.
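Equations (12)-(14) amount to classic per-channel mean/standard-deviation matching; a minimal pure-Python sketch for one channel (assuming the channel is given as a flat list of pixel values) could be:

```python
import math

# Sketch of eqs. (12)-(14): match the statistics of one lab channel of
# image v2 to the corresponding channel of image v1.

def mean_std(values):
    """Eq. (12): mean and (population) standard deviation of a channel."""
    n = len(values)
    mu = sum(values) / n
    sigma = math.sqrt(sum((v - mu) ** 2 for v in values) / n)
    return mu, sigma

def match_channel(v1_channel, v2_channel):
    """Eqs. (13)-(14): remap v2's channel to v1's mean and deviation."""
    mu1, sigma1 = mean_std(v1_channel)
    mu2, sigma2 = mean_std(v2_channel)
    f = sigma1 / sigma2                                 # eq. (13)
    return [f * (v - mu2) + mu1 for v in v2_channel]    # eq. (14)
```

After matching, the channel of $v2$ has the same mean and standard deviation as that of $v1$, which is what removes the inter-camera color cast.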

Step 3 (global optimization). We then match the colors of the images from the 4 cameras anticlockwise to reach a globally optimized result. Firstly we match the colors of V4 to V3, then V3 to V2, then V2 to V1, and finally V4 to V1, which forms a ring shape, as shown in Figure 5. The processing result of the left view is shown in Figure 4(c).

(3) Weighted Blending. After color matching, the visual effect of the output image has been greatly improved, but around the stitching seams between different corrected images it is still not good enough. Therefore, we use (15) to ensure a smooth transition. The interpolation result of the left view angle image is shown in Figure 4(d):

$$O(i,j) = V_1(i,j)\cdot\frac{d}{d_{\max}} + V_2(i,j)\cdot\left(1 - \frac{d}{d_{\max}}\right),\qquad 0 < d < d_{\max} \tag{15}$$

where $O(i,j)$ denotes the pixel value in the output image and $(i,j)$ is the position index of the pixel; $V_1(i,j)$ and $V_2(i,j)$ denote the corresponding pixel values in corrected images $V_1$ and $V_2$; $d$ denotes the distance from the pixel to the seam; and $d_{\max}$ denotes the width of the transition field, as shown in Figure 5.
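Per pixel, the distance-weighted blend of (15) is a single linear interpolation; a minimal sketch:

```python
# Sketch of eq. (15): distance-weighted blending of two overlapping pixels.
# d is the pixel's distance to the seam, d_max the transition-band width.
def blend_pixel(v1, v2, d, d_max):
    """Weight v1 by d/d_max and v2 by the complement."""
    w = d / d_max
    return v1 * w + v2 * (1.0 - w)
```

At the seam ($d \to 0$) the output follows $V_2$, and at the far edge of the transition band ($d \to d_{\max}$) it follows $V_1$, so the two images meet without a visible step.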

4. Experiment Results

Some details of the experiment have been provided in Section 2, so in this section we only present the results. The fisheye images captured by the 4 cameras are shown in Figure 6, and their corresponding corrected images are shown in Figure 7. The corner detection and calculation results are shown in Figure 8, where Figure 8(a) shows

Figure 5: The illustration of the ring color matching method.

Table 1: Comparison of different corner detection algorithms.

Method                               Cc/N (%)   Cb/N (%)   (Cc + Cb)/N (%)
Rufli [19]                           75         0          75
Harris [13]                          69.3       16.5       85.9
Tian [7]                             16.83      2.63       19.46
Integrated corner detection method   75         25         100

the corner positions detected in the distorted image and Figure 8(b) shows the corresponding positions calculated in the corrected image. The integrated corner detection algorithm is compared with several other corner detection algorithms in Table 1.

In Table 1, Cc denotes the number of corner points detected correctly in the chessboard, Cb denotes the number of corner points detected correctly in the big black box, and N denotes the total number of corner points detected in the calibration scene. The Rufli method cannot detect the vertices of the big black box. The Harris and Shi-Tomasi methods cannot detect all target vertices and generate a lot of redundant corners. The integrated corner detection algorithm can accurately extract all the target corner points of the calibration patterns in the scene. As a result, the integrated corner detection algorithm proposed by us is effective.

The output image results are shown in Figure 9. Figure 9(a) is the result before image blending and Figure 9(b) is the result after image blending. The experimental results show that the proposed algorithm achieves a smooth visual effect around the stitching seam, which proves that our ring fusion method is effective.

5. Conclusion

This paper has proposed a ring fusion method to obtain a better visual effect in the AVM system for intelligent driving. To achieve this, an integrated corner detection method for image registration and a ring-shaped scheme for image blending have been presented. Experimental results prove that the designed approach is satisfactory: 100% of the


Figure 6: Fisheye images from each camera. (a) Front view; (b) right view; (c) back view; (d) left view.

Figure 7: Corresponding corrected images. (a) Front view; (b) right view; (c) back view; (d) left view.


Figure 8: The corner detection and calculation result. (a) Corner positions in the distorted image; (b) corresponding positions in the undistorted image.

Figure 9: Stitched bird's-eye view image of the AVM. (a) Before fusion; (b) after fusion.

required corners are accurately and fully automatically detected. The transition around the fusion seam is smooth, with no obvious stitching trace. However, the images processed in this experiment are static. In future work, we will port this algorithm to a development board for dynamic real-time testing and try to apply the ring fusion method to other occasions.

Conflicts of Interest

The authors declare no conflicts of interest.

Acknowledgments

This work was supported by the National High Technology Research and Development Program ("973" Program) of China under Grant no. 2016YFB0100903; the Beijing Municipal Science and Technology Commission special major program under Grant nos. D171100005017002 and D171100005117002; the National Natural Science Foundation of China under Grant no. U1664263; the Junior Fellowships for Advanced Innovation Think-Tank Program of the China Association for Science and Technology under Grant no. DXB-ZKQN-2017-035; and a project funded by the China Postdoctoral Science Foundation under Grant no. 2017M620765.

References

[1] C. Guo, J. Meguro, Y. Kojima, and T. Naito, "A multimodal ADAS system for unmarked urban scenarios based on road context understanding," IEEE Transactions on Intelligent Transportation Systems, vol. 16, no. 4, pp. 1690–1704, 2015.

[2] A. Pandey and U. C. Pati, "Development of saliency-based seamless image compositing using hybrid blending (SSICHB)," IET Image Processing, vol. 11, no. 6, pp. 433–442, 2017.

[3] Ministry of Public Security Traffic Administration, People's Republic of China, Road Traffic Accident Statistic Annual Report, Ministry of Public Security Traffic Management Science Research Institute, Wuxi, Jiangsu Province, 2011.

[4] S. Lee, S. J. Lee, J. Park, and H. J. Kim, "Exposure correction and image blending for planar panorama stitching," in Proceedings of the 16th International Conference on Control, Automation and Systems (ICCAS 2016), pp. 128–131, Korea, October 2016.

[5] H. Ma, M. Wang, M. Fu, and C. Yang, "A new discrete-time guidance law based on trajectory learning and prediction," in Proceedings of the AIAA Guidance, Navigation, and Control Conference, Minneapolis, Minnesota.

[6] C.-L. Su, C.-J. Lee, M.-S. Li, and K.-P. Chen, "3D AVM system for automotive applications," in Proceedings of the 10th International Conference on Information, Communications and Signal Processing (ICICS 2015), Singapore, December 2015.

[7] F. Tian and P. Shi, "Image mosaic using ORB descriptor and improved blending algorithm," in Proceedings of the 2014 7th International Congress on Image and Signal Processing (CISP 2014), pp. 693–698, China, October 2014.

[8] S. M. Santhanam, V. Balisavira, S. H. Roh, and V. K. Pandey, "Lens distortion correction and geometrical alignment for Around View Monitoring system," in Proceedings of the 18th IEEE International Symposium on Consumer Electronics (ISCE 2014), Republic of Korea, June 2014.

[9] D. Suru and S. Karamchandani, "Image fusion in variable raster media for enhancement of graphic device interface," in Proceedings of the 1st International Conference on Computing, Communication, Control and Automation (ICCUBEA 2015), pp. 733–736, India, February 2015.

[10] C. Yang, H. Ma, B. Xu, and M. Fu, "Adaptive control with nearest-neighbor previous instant compensation for discrete-time nonlinear strict-feedback systems," in Proceedings of the 2012 American Control Conference (ACC 2012), pp. 1913–1918, Canada, June 2012.

[11] Z. Jiang, J. Wu, D. Cui et al., "Stitching method for distorted image based on SIFT feature matching," in Proceedings of the International Conference on Computing and Networking Technology, pp. 107–110, 2013.

[12] E. M. Upadhyay and N. K. Rana, "Exposure fusion for concealed weapon detection," in Proceedings of the 2014 2nd International Conference on Devices, Circuits and Systems (ICDCS 2014), India, March 2014.

[13] I. Sipiran and B. Bustos, "Harris 3D: a robust extension of the Harris operator for interest point detection on 3D meshes," The Visual Computer, vol. 27, no. 11, pp. 963–976, 2011.

[14] Y. Zhao and D. Xu, "Fast image blending using seeded region growing," Communications in Computer and Information Science, vol. 525, pp. 408–415, 2015.

[15] Y.-K. Huo, G. Wei, Y.-D. Zhang, and L.-N. Wu, "An adaptive threshold for the Canny operator of edge detection," in Proceedings of the 2nd International Conference on Image Analysis and Signal Processing (IASP 2010), pp. 371–374, China, April 2010.

[16] G. Peljor and T. Kondo, "A saturation-based image fusion method for static scenes," in Proceedings of the 6th International Conference on Information and Communication Technology for Embedded Systems (IC-ICTES 2015), Thailand, March 2015.

[17] L. Jiang, J. Liu, D. Li, and Z. Zhu, "3D point sets matching method based on Moravec vertical interest operator," Advances in Intelligent and Soft Computing, vol. 144, no. 1, pp. 53–59, 2012.

[18] J. Lang, "Color image encryption based on color blend and chaos permutation in the reality-preserving multiple-parameter fractional Fourier transform domain," Optics Communications, vol. 338, pp. 181–192, 2015.

[19] M. Rufli, D. Scaramuzza, and R. Siegwart, "Automatic detection of checkerboards on blurred and distorted images," in Proceedings of the 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 3121–3126, France, September 2008.

[20] J.-E. Scholtz, K. Hüsers, M. Kaup et al., "Non-linear image blending improves visualization of head and neck primary squamous cell carcinoma compared to linear blending in dual-energy CT," Clinical Radiology, vol. 70, no. 2, pp. 168–175, 2015.

[21] Y. Liu, S. Liu, Y. Cao, and Z. Wang, "Automatic chessboard corner detection method," IET Image Processing, vol. 10, no. 1, pp. 16–23, 2016.

[22] Y. Zhang, S. Deng, Z. Liu, and Y. Wang, "Aesthetic QR codes based on two-stage image blending," in MultiMedia Modeling, vol. 8936 of Lecture Notes in Computer Science, pp. 183–194, Springer International Publishing, Cham, 2015.

[23] K. Pulli, M. Tico, and Y. Xiong, "Mobile panoramic imaging system," in Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW 2010), pp. 108–115, USA, June 2010.

[24] X. Zhang, H. Gao, M. Guo, G. Li, Y. Liu, and D. Li, "A study on key technologies of unmanned driving," CAAI Transactions on Intelligence Technology, vol. 1, no. 1, pp. 4–13, 2016.

[25] Y. Tang and J. Shin, "Image stitching with efficient brightness fusion and automatic content awareness," in Proceedings of the International Conference on Signal Processing and Multimedia Applications, pp. 60–66, Vienna, Austria, August 2014.

[26] H. B. Gao, X. Y. Zhang, T. L. Zhang, Y. C. Liu, and D. Y. Li, "Research of intelligent vehicle variable granularity evaluation based on cloud model," Acta Electronica Sinica, vol. 44, no. 2, pp. 365–374, 2016.

[27] J.-H. Cha, Y.-S. Jeon, Y.-S. Moon, and S.-H. Lee, "Seamless and fast panoramic image stitching," in Proceedings of the 2012 IEEE International Conference on Consumer Electronics (ICCE 2012), pp. 29–30, USA, January 2012.

[28] J. Liu, H. Ma, X. Ren, and M. Fu, "Optimal formation of robots by convex hull and particle swarm optimization," in Proceedings of the 2013 3rd IEEE Symposium on Computational Intelligence in Control and Automation (CICA 2013), pp. 104–111, Singapore, April 2013.


Journal of Robotics 3

AB

CD

EF

1

CD

EF

OX

Y

Figure 2 The illustration of calibration scene

After themeasurement of size data the target positions ofcorner points in the coordinate of output image are calculatedby the following parameters the size data measured abovethe size of output image defined by users and the size ofcalibration pattern and vehicleThe calculation process of thetarget position of all points is the same We take the targetposition of point 1 (as shown in Figure 1) as an exampleFirstly we calculate the position of point 1 in the calibrationscene as shown in

1199091 = minus12119882 minus 119908119908 minus 11990811988711199101 = minus12119871 minus 119908119908 minus 1199081198871

(1)

where the origin of calibration scene is located at the centerof the calibration scene as shown in Figure 1 (1199091 1199101) denotesthe position of point 1119882denotes the vehicle width119871 denotesthe vehicle length 119908119908 is the white edge width and 1199081198871 is thewidth of big black box

Secondly we use the position in calibration scene tocalculate the position in the coordinate of output image asshown in

scale = 119882img119882real

1199061 = scale lowast 1199091 + 119882img2V1 = scale lowast 1199102 + 119871 img2

(2)

where scale denotes the scaling factor from calibration sceneto coordinate of output image 119882img denotes the width ofoutput image 119882real denotes the width of calibration scene

(1199061 V1) denotes the position in calibration of output image(1199091 1199101) denotes the position in calibration scene

32 Image Registration Based on Corner Detection

321 Detect and Calculate the Corner Point Positions Firstlythe corners are detected automatically in the fisheye imageby the integrated corner detection method Secondly thecorresponding positions of these corners in the correctedimage are calculated using the correction model Finallywe save the positions in the corrected image for the nextcomputation of homography matrix

Algorithm steps of integrated corner detection methodare as follows

(1) Input fisheye images of calibration scene from all 4cameras

(2) Use the Rufli corner detection method to detect thecorners in the chessboard array

(3) Based on the relative position between the black boxand the chessboard array use the detected cornersfrom step (2) to obtain the Region Of Interest (ROI)of the big black box

(4) Preprocess the ROI by adaptive binarization, using the "adaptiveThreshold" function in OpenCV, and a morphological closing operation to denoise.

(5) Obtain the contour of the big black box from the ROI and the positions of the contour vertices with the "findContours" function in OpenCV. Then remove redundant contours as follows:

(1) Limit the minimum area of the contour according to the size ratio of the chessboard array to the big black box and their relative positions;

4 Journal of Robotics

[Figure 3: The illustration of image registration and coordinate unification. The figure shows the coordinate frames of the four corrected camera images (X_ci, Y_ci, Z_ci with origin O_ci, i = 1, ..., 4) and the coordinate frame of the output image (X_O, Y_O, Z_O with origin O_O).]

the threshold of the minimum area is calculated as shown in (3):

b_area_min = 1.5 · cb_area_avg (front and rear view),
b_area_min = 0.1 · cb_area_avg (left and right view),
(3)

where b_area_min denotes the area threshold of the big black box and cb_area_avg denotes the average area of a small box in the chessboard array;

(2) Limit the contour shape: according to the location of the big black box and the imaging features of the fisheye camera, the big black box should have a fixed shape. The shape restrictions are shown in (4):

d_1 ≥ 0.15·p  &&  d_2 ≥ 0.15·p,
8·d_3 ≥ d_4  &&  8·d_4 ≥ d_3,
area_contour > 0.3 · area_rect,
(4)

where d_1 and d_2 denote the diagonal lengths of the contour, d_3 and d_4 denote the lengths of adjacent sides of the contour, p denotes the perimeter of the contour, area_contour denotes the area of the contour, and area_rect denotes the area of the bounding rectangle of the contour.

(6) Use the SUSAN method to locate the exact positions of the contour vertices around the positions obtained in step (5).
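The contour filters in (3)-(4) are plain geometric tests and can be sketched independently of OpenCV (a minimal Python example; the function name `is_big_black_box` and the sample rectangle are ours, and the contour is assumed to be an ordered 4-vertex polygon):

```python
import numpy as np

def is_big_black_box(verts, cb_area_avg, front_or_rear=True):
    """Apply the contour restrictions of equations (3)-(4) to a
    4-vertex contour. `cb_area_avg` is the average chessboard-cell area."""
    verts = np.asarray(verts, dtype=float)
    sides = np.linalg.norm(np.roll(verts, -1, axis=0) - verts, axis=1)
    p = sides.sum()                                   # perimeter
    d1 = np.linalg.norm(verts[2] - verts[0])          # diagonals
    d2 = np.linalg.norm(verts[3] - verts[1])
    d3, d4 = sides[0], sides[1]                       # adjacent sides
    x, y = verts[:, 0], verts[:, 1]
    # Shoelace formula for the contour area.
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    rect = (x.max() - x.min()) * (y.max() - y.min())  # bounding rectangle

    # Equation (3): minimum-area threshold, view-dependent factor.
    b_area_min = (1.5 if front_or_rear else 0.1) * cb_area_avg
    if area < b_area_min:
        return False
    # Equation (4): diagonal, aspect-ratio, and fill restrictions.
    return (d1 >= 0.15 * p and d2 >= 0.15 * p
            and 8 * d3 >= d4 and 8 * d4 >= d3
            and area > 0.3 * rect)

# A 40x40 square passes easily with an average cell area of 100,
# while a tiny 1x1 contour is rejected by the area threshold.
print(is_big_black_box([(0, 0), (40, 0), (40, 40), (0, 40)], 100.0))
print(is_big_black_box([(0, 0), (1, 0), (1, 1), (0, 1)], 100.0))
```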

3.2.2. Image Registration and Coordinate Unification. After the calibration and corner detection, the corner positions in the coordinate frames of the corrected images and their target positions in the coordinate frame of the output image are obtained. Then we need to unify the coordinates of the 4 corrected images into the coordinate frame of the output image, as shown in Figure 3. The specific process is as follows. Firstly, we calculate the homography transform matrix, as shown in (5); the form of this matrix is shown in (6):

P_b = H · P_u,
(5)

H = [ a_11  a_12  a_13
      a_21  a_22  a_23
      a_31  a_32  a_33 ],
(6)

where P_u = [x_u, y_u, 1]^T denotes the corner position in the coordinate frame of the corrected image and P_b = [x_b, y_b, 1]^T denotes the target position in the coordinate frame of the output image.

Secondly, we project every pixel of the 4 corrected images into the coordinate frame of the output image, as shown in (7):

w = a_31·x_c + a_32·y_c + a_33,
x_o = (a_11·x_c + a_12·y_c + a_13) / w,
y_o = (a_21·x_c + a_22·y_c + a_23) / w,
(7)

where (x_c, y_c) denotes the pixel position in the coordinate frame of the corrected image and (x_o, y_o) denotes the pixel position in the coordinate frame of the output image.
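The projection in (5)-(7) can be sketched as follows (a minimal Python example with an arbitrary illustrative matrix H, not a calibrated one):

```python
import numpy as np

# An arbitrary illustrative homography; a real H comes from the corner
# correspondences obtained in Section 3.2.1.
H = np.array([[1.0,   0.2,   10.0],
              [0.1,   1.0,   20.0],
              [0.001, 0.002,  1.0]])

def project(H, xc, yc):
    """Equation (7): map a corrected-image pixel into the output image."""
    w = H[2, 0] * xc + H[2, 1] * yc + H[2, 2]   # homogeneous scale factor
    xo = (H[0, 0] * xc + H[0, 1] * yc + H[0, 2]) / w
    yo = (H[1, 0] * xc + H[1, 1] * yc + H[1, 2]) / w
    return xo, yo

# Equivalent homogeneous form of equation (5): P_b = H * P_u.
Pu = np.array([50.0, 80.0, 1.0])
Pb = H @ Pu
print(project(H, 50.0, 80.0), (Pb[0] / Pb[2], Pb[1] / Pb[2]))
```

Both routes give the same output pixel, which is why (7) is just (5) written out per coordinate with the homogeneous division made explicit.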

3.3. Image Blending. As the corrected images from the 4 cameras differ from each other in brightness, saturation, and color, we blend them with the ring fusion method to improve the visual effect of the output image.

The detailed process is as follows.

(1) Equalization Preprocessing. The "imadjust" function in Matlab is used for equalization preprocessing to reduce


[Figure 4: The image blending result. (a) Original image; (b) after imadjust; (c) after ring color matching; (d) after interpolation.]

the brightness difference among images. For example, the original image of the left view is shown in Figure 4(a), and the processing result in Figure 4(b).

(2) Ring Color Matching

Step 1 (spatial transformation). As the RGB space has strong inter-channel correlation, it is not suitable for image color processing. So we transform the RGB space to the lαβ space, where the correlation between the three channels is smallest. The conversion involves three transformations: RGB → CIE XYZ → LMS → lαβ.

Firstly, from RGB space to CIE XYZ space, one has (8):

[X, Y, Z]^T = [ 0.5141  0.3239  0.1604
                0.2651  0.6702  0.0641
                0.0241  0.1228  0.8444 ] [R, G, B]^T.
(8)

Secondly, from CIE XYZ space to LMS space, one has (9):

[L, M, S]^T = [  0.3897  0.6890  -0.0787
                -0.2298  1.1834   0.0464
                 0       0        1      ] [X, Y, Z]^T.
(9)

Since the data are scattered in the LMS space, they are further converted to a base-10 logarithmic space, as shown in (10). This makes the data distribution not only more compact but also consistent with psychophysical findings on human color perception:

L = log L,  M = log M,  S = log S.
(10)

Finally, from LMS space to lαβ space, one has (11). This transformation is based on principal component analysis (PCA) of the data, where l is the first principal component, α the second, and β the third:

[l, α, β]^T = [ 1/√3   0     0
                0      1/√6  0
                0      0     1/√2 ] [ 1   1   1
                                      1   1  -2
                                      1  -1   0 ] [L, M, S]^T.
(11)

After the above three steps, the conversion from RGB to lαβ space is completed.
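The three-step conversion (8)-(11) can be sketched as follows (a minimal Python example; pixel values are assumed to be normalized to [0, 1] and to yield strictly positive LMS responses so that the logarithm in (10) is defined):

```python
import numpy as np

# Matrices as printed in equations (8), (9), and (11).
M_XYZ = np.array([[0.5141, 0.3239, 0.1604],
                  [0.2651, 0.6702, 0.0641],
                  [0.0241, 0.1228, 0.8444]])
M_LMS = np.array([[ 0.3897, 0.6890, -0.0787],
                  [-0.2298, 1.1834,  0.0464],
                  [ 0.0,    0.0,     1.0   ]])
M_PCA = np.diag([1 / np.sqrt(3), 1 / np.sqrt(6), 1 / np.sqrt(2)]) @ \
        np.array([[1, 1, 1], [1, 1, -2], [1, -1, 0]])

def rgb_to_lab(rgb):
    """Convert an (..., 3) RGB array to lαβ via equations (8)-(11)."""
    xyz = rgb @ M_XYZ.T          # equation (8)
    lms = xyz @ M_LMS.T          # equation (9)
    log_lms = np.log10(lms)      # equation (10), base-10 logarithm
    return log_lms @ M_PCA.T     # equation (11)

# A mid-gray pixel: the chromatic channels α and β come out near zero.
lab = rgb_to_lab(np.array([[0.5, 0.5, 0.5]]))
print(lab)
```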

Step 2 (color registration). Firstly, the mean and standard deviation of every channel in lαβ space are calculated according to (12):

μ = (1/N) Σ_{i=1}^{N} v_i,
σ = sqrt( (1/N) Σ_{i=1}^{N} (v_i - μ)² ),
(12)

where μ denotes the mean value, N the total number of pixels, v_i the value of pixel i, and σ the standard deviation.

Secondly, the color matching factors are calculated according to (13):

f_l = σ_l^v1 / σ_l^v2,
f_α = σ_α^v1 / σ_α^v2,
f_β = σ_β^v1 / σ_β^v2,
(13)

where f_l denotes the factor that matches the color of image V2 to image V1 in channel l, and σ_l^v1 and σ_l^v2 denote the standard deviations of images V1 and V2 in channel l; the α and β channels are analogous.

Finally, we match the color of the images as shown in (14):

l′_v2 = f_l · (l_v2 - μ_l^v2) + μ_l^v1,
α′_v2 = f_α · (α_v2 - μ_α^v2) + μ_α^v1,
β′_v2 = f_β · (β_v2 - μ_β^v2) + μ_β^v1,
(14)

where l′_v2 denotes the pixel value of image V2 after color matching in channel l, f_l denotes the color-matching factor in channel l, l_v2 denotes the pixel value of image V2 in channel l, and μ_l^v2 and μ_l^v1 denote the mean values of images V2 and V1 in channel l; the α and β channels are treated analogously.

Step 3 (global optimization). We then match the colors of the images from the 4 cameras anticlockwise to reach a globally consistent result. Firstly, we match the colors of V4 to V3, then V3 to V2, then V2 to V1, and finally V4 to V1, which forms a ring, as shown in Figure 5. The processing result of the left view is shown in Figure 4(c).
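Steps 2 and 3 can be sketched as follows (a minimal Python example; the images are random stand-ins for the four views in lαβ space, and the helper name `match_color` is ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def match_color(src, ref):
    """Re-color src so each channel matches ref's statistics:
    equations (12) (mean/std), (13) (factors), and (14) (transfer)."""
    mu_s = src.mean(axis=(0, 1))        # equation (12), per channel
    mu_r = ref.mean(axis=(0, 1))
    sd_s = src.std(axis=(0, 1))
    sd_r = ref.std(axis=(0, 1))
    f = sd_r / sd_s                     # equation (13)
    return f * (src - mu_s) + mu_r      # equation (14)

# Four synthetic (H, W, 3) views with deliberately different statistics.
views = {k: rng.normal(k, 1 + k, (8, 8, 3)) for k in range(1, 5)}

# Ring order of Step 3: V4->V3, V3->V2, V2->V1, and finally V4->V1.
for src, ref in [(4, 3), (3, 2), (2, 1), (4, 1)]:
    views[src] = match_color(views[src], views[ref])

print(views[4].mean(axis=(0, 1)), views[1].mean(axis=(0, 1)))
```

After the ring pass, every view carries the channel statistics of V1, which is what makes the final V4→V1 match close the ring instead of leaving a visible statistics jump at the last seam.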

(3) Weighted Blending. After color matching, the visual effect of the output image is greatly improved, but it is still unsatisfactory around the stitching seams between different corrected images. Therefore, we use (15) to ensure a smooth transition. The interpolation result of the left-view image is shown in Figure 4(d):

O(i, j) = V1(i, j) · (d / d_max) + V2(i, j) · (1 - d / d_max),  0 < d < d_max,
(15)

where O(i, j) denotes the pixel value in the output image and (i, j) is the position index of the pixel; V1(i, j) and V2(i, j) denote the corresponding pixel values in corrected images V1 and V2; d denotes the distance from the pixel to the seam; and d_max denotes the width of the transition band, as shown in Figure 5.
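Equation (15) can be sketched as follows (a minimal Python example on synthetic single-color image strips; the band width d_max and the pixel values are arbitrary):

```python
import numpy as np

# Transition band of width d_max around the seam; d is each pixel
# column's distance to the seam on the V1 side, with 0 < d < d_max.
d_max = 8
d = np.arange(1, d_max)[None, :, None]        # shape (1, d_max-1, 1)

# Two synthetic overlapping strips: V1 is brighter than V2.
V1 = np.full((4, d_max - 1, 3), 200.0)
V2 = np.full((4, d_max - 1, 3), 100.0)

# Equation (15): linear distance weighting across the band.
w = d / d_max
O = V1 * w + V2 * (1 - w)
print(O[0, 0, 0], O[0, -1, 0])  # near the seam vs. near the V1 side
```

Near the seam (small d) the output is dominated by V2, and it ramps linearly toward V1 across the band, which is what removes the hard intensity step at the stitching line.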

4. Experimental Results

Some details of the experiment have been provided in Section 2 of this paper, so here we only present the results. The fisheye images captured by the 4 cameras are shown in Figure 6, and their corresponding corrected images in Figure 7. The corner detection and calculation results are shown in Figure 8, where Figure 8(a) shows

[Figure 5: The illustration of the ring color matching method, showing views V1, V2, V3, and V4 and the transition band of width d_max around each seam.]

Table 1: Comparison of different corner detection algorithms.

Method                             | Cc/N  | Cb/N | (Cc + Cb)/N
-----------------------------------|-------|------|------------
Rufli [19]                         | 75    | 0    | 75
Harris [13]                        | 69.3  | 16.5 | 85.9
Tian [7]                           | 168.3 | 26.3 | 194.6
Integrated corner detection method | 75    | 25   | 100

the corner positions detected in the distorted image and Figure 8(b) the corresponding positions calculated in the corrected image. The integrated corner detection algorithm is compared with several other corner detection algorithms in Table 1.

In Table 1, Cc denotes the number of corner points correctly detected in the chessboard, Cb the number of corner points correctly detected in the big black box, and N the total number of corner points detected in the calibration scene. The Rufli method cannot detect the vertices of the big black box. The Harris and Shi-Tomasi methods cannot detect all target vertices and generate many redundant corners. The integrated corner detection algorithm accurately extracts all the target corner points of the calibration pattern in the scene. As a result, the integrated corner detection algorithm proposed by us is effective.

The output image is shown in Figure 9: Figure 9(a) is the result before image blending and Figure 9(b) the result after image blending. The experimental results show that the proposed algorithm yields a smooth visual effect around the stitching seams, which proves that our ring fusion method is effective.

5. Conclusion

This paper has proposed a ring fusion method to obtain a better visual effect for the AVM system of intelligent driving. To this end, an integrated corner detection method for image registration and a ring-shaped scheme for image blending have been presented. Experimental results prove that the designed approach is satisfactory: 100% of the


[Figure 6: Fisheye images from each camera. (a) Front view; (b) right view; (c) back view; (d) left view.]

[Figure 7: Corresponding corrected images. (a) Front view; (b) right view; (c) back view; (d) left view.]


[Figure 8: The corner detection and calculation result. (a) Corner positions in the distorted image; (b) corresponding positions in the undistorted image.]

[Figure 9: Stitched bird's-eye view image of the AVM. (a) Before fusion; (b) after fusion.]

required corners are accurately and fully automatically detected, and the transition around the fusion seam is smooth, with no obvious stitching trace. However, the images processed in this experiment are static, so in future work we will port the algorithm to a development board for dynamic real-time testing and try to apply the ring fusion method to other applications.

Conflicts of Interest

The authors declare no conflicts of interest

Acknowledgments

This work was supported by the National High Technology Research and Development Program ("973" Program) of China under Grant no. 2016YFB0100903; Beijing Municipal Science and Technology Commission special major projects under Grant nos. D171100005017002 and D171100005117002; the National Natural Science Foundation of China under Grant no. U1664263; Junior Fellowships for Advanced Innovation Think-Tank Program of China Association for Science and Technology under Grant no. DXB-ZKQN-2017-035; and a project funded by the China Postdoctoral Science Foundation under Grant no. 2017M620765.

References

[1] C. Guo, J. Meguro, Y. Kojima, and T. Naito, "A Multimodal ADAS System for Unmarked Urban Scenarios Based on Road Context Understanding," IEEE Transactions on Intelligent Transportation Systems, vol. 16, no. 4, pp. 1690–1704, 2015.

[2] A. Pandey and U. C. Pati, "Development of saliency-based seamless image compositing using hybrid blending (SSICHB)," IET Image Processing, vol. 11, no. 6, pp. 433–442, 2017.

[3] Ministry of Public Security Traffic Administration, People's Republic of China, Road Traffic Accident Statistic Annual Report, Ministry of Public Security Traffic Management Science Research Institute, Wuxi, Jiangsu Province, 2011.

[4] S. Lee, S. J. Lee, J. Park, and H. J. Kim, "Exposure correction and image blending for planar panorama stitching," in Proceedings of the 16th International Conference on Control, Automation and Systems (ICCAS 2016), pp. 128–131, Korea, October 2016.

[5] H. Ma, M. Wang, M. Fu, and C. Yang, "A New Discrete-time Guidance Law Based on Trajectory Learning and Prediction," in Proceedings of the AIAA Guidance, Navigation, and Control Conference, Minneapolis, Minnesota.

[6] C.-L. Su, C.-J. Lee, M.-S. Li, and K.-P. Chen, "3D AVM system for automotive applications," in Proceedings of the 10th International Conference on Information, Communications and Signal Processing (ICICS 2015), Singapore, December 2015.

[7] F. Tian and P. Shi, "Image Mosaic using ORB descriptor and improved blending algorithm," in Proceedings of the 2014 7th International Congress on Image and Signal Processing (CISP 2014), pp. 693–698, China, October 2014.

[8] S. M. Santhanam, V. Balisavira, S. H. Roh, and V. K. Pandey, "Lens distortion correction and geometrical alignment for Around View Monitoring system," in Proceedings of the 18th IEEE International Symposium on Consumer Electronics (ISCE 2014), Republic of Korea, June 2014.

[9] D. Suru and S. Karamchandani, "Image fusion in variable raster media for enhancement of graphic device interface," in Proceedings of the 1st International Conference on Computing, Communication, Control and Automation (ICCUBEA 2015), pp. 733–736, India, February 2015.

[10] C. Yang, H. Ma, B. Xu, and M. Fu, "Adaptive control with nearest-neighbor previous instant compensation for discrete-time nonlinear strict-feedback systems," in Proceedings of the 2012 American Control Conference (ACC 2012), pp. 1913–1918, Canada, June 2012.

[11] Z. Jiang, J. Wu, D. Cui et al., "Stitching Method for Distorted Image Based on SIFT Feature Matching," in Proceedings of the International Conference on Computing and Networking Technology, pp. 107–110, 2013.

[12] E. M. Upadhyay and N. K. Rana, "Exposure fusion for concealed weapon detection," in Proceedings of the 2014 2nd International Conference on Devices, Circuits and Systems (ICDCS 2014), India, March 2014.

[13] I. Sipiran and B. Bustos, "Harris 3D: a robust extension of the Harris operator for interest point detection on 3D meshes," The Visual Computer, vol. 27, no. 11, pp. 963–976, 2011.

[14] Y. Zhao and D. Xu, "Fast image blending using seeded region growing," Communications in Computer and Information Science, vol. 525, pp. 408–415, 2015.

[15] Y.-K. Huo, G. Wei, Y.-D. Zhang, and L.-N. Wu, "An adaptive threshold for the Canny operator of edge detection," in Proceedings of the 2nd International Conference on Image Analysis and Signal Processing (IASP 2010), pp. 371–374, China, April 2010.

[16] G. Peljor and T. Kondo, "A saturation-based image fusion method for static scenes," in Proceedings of the 6th International Conference on Information and Communication Technology for Embedded Systems (IC-ICTES 2015), Thailand, March 2015.

[17] L. Jiang, J. Liu, D. Li, and Z. Zhu, "3D point sets matching method based on Moravec vertical interest operator," Advances in Intelligent and Soft Computing, vol. 144, no. 1, pp. 53–59, 2012.

[18] J. Lang, "Color image encryption based on color blend and chaos permutation in the reality-preserving multiple-parameter fractional Fourier transform domain," Optics Communications, vol. 338, pp. 181–192, 2015.

[19] M. Rufli, D. Scaramuzza, and R. Siegwart, "Automatic detection of checkerboards on blurred and distorted images," in Proceedings of the 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 3121–3126, France, September 2008.

[20] J.-E. Scholtz, K. Husers, M. Kaup et al., "Non-linear image blending improves visualization of head and neck primary squamous cell carcinoma compared to linear blending in dual-energy CT," Clinical Radiology, vol. 70, no. 2, pp. 168–175, 2015.

[21] Y. Liu, S. Liu, Y. Cao, and Z. Wang, "Automatic chessboard corner detection method," IET Image Processing, vol. 10, no. 1, pp. 16–23, 2016.

[22] Y. Zhang, S. Deng, Z. Liu, and Y. Wang, "Aesthetic QR Codes Based on Two-Stage Image Blending," in MultiMedia Modeling, vol. 8936 of Lecture Notes in Computer Science, pp. 183–194, Springer International Publishing, Cham, 2015.

[23] K. Pulli, M. Tico, and Y. Xiong, "Mobile panoramic imaging system," in Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW 2010), pp. 108–115, USA, June 2010.

[24] X. Zhang, H. Gao, M. Guo, G. Li, Y. Liu, and D. Li, "A study on key technologies of unmanned driving," CAAI Transactions on Intelligence Technology, vol. 1, no. 1, pp. 4–13, 2016.

[25] Y. Tang and J. Shin, "Image Stitching with Efficient Brightness Fusion and Automatic Content Awareness," in Proceedings of the International Conference on Signal Processing and Multimedia Applications, pp. 60–66, Vienna, Austria, August 2014.

[26] H. B. Gao, X. Y. Zhang, T. L. Zhang, Y. C. Liu, and D. Y. Li, "Research of intelligent vehicle variable granularity evaluation based on cloud model," Acta Electronica Sinica, vol. 44, no. 2, pp. 365–374, 2016.

[27] J.-H. Cha, Y.-S. Jeon, Y.-S. Moon, and S.-H. Lee, "Seamless and fast panoramic image stitching," in Proceedings of the 2012 IEEE International Conference on Consumer Electronics (ICCE 2012), pp. 29-30, USA, January 2012.

[28] J. Liu, H. Ma, X. Ren, and M. Fu, "Optimal formation of robots by convex hull and particle swarm optimization," in Proceedings of the 2013 3rd IEEE Symposium on Computational Intelligence in Control and Automation (CICA 2013), pp. 104–111, Singapore, April 2013.


4 Journal of Robotics

Coordinate ofoutput image

Coordinate ofcorrected image

Xc1

Oc1

Zc1

Yc1

Xc2

Oc2

Zc2

Yc2Xc3

Oc3

Zc3

Yc3

Xc4

Oc4

Zc4

Yc4

XOYO

ZO

OO

Figure 3 The illustration of image registration and coordinate unification

the threshold of minimum area is calculated asshown in

119887 areamin = 15 lowast 119888119887areaavg (front and rear view)01 lowast 119888119887areaavg (left and right view) (3)

where 119887 areamin denotes the threshold of bigblack box area and 119888119887areaavg denotes the averagearea of small box in chessboard array

(2) Limit contour shape according to the locationof the big black box and the imaging features offisheye camera the big black box should be in afixed shape The shape restrictions are shown in

1198891 ge 015 lowast 119901 ampamp 1198892 ge 015 lowast 1199018 lowast 1198893 ge 1198894 ampamp lowast 8 lowast 1198894 ge 1198893

areacontour gt 03 lowast arearect(4)

where 1198891 and 1198892 denote the diagonal lengthof the contour 1198893 and 1198894 denote the length ofadjacent side of contour 119901 denotes perimeter ofcontour areacontour denotes the area of contourand arearect denotes the area of envelope ofcontour

(6) Use the SUSAN method to locate the exact positionsof the contour vertex around positions obtained fromstep (5)

322 Image Registration and Coordinate Unification Afterthe calibration of Figure 3 and corner detection the cornerpositions in coordinates of corrected images and their targetpositions in coordinates of output images are obtained Then

we need to unify the coordinate of 4 corrected images into thecoordinate of output image as shown in Figure 3The specificprocess is as follows Firstly we calculate the homographytransform matrix as shown in (5) The form of this matrixis shown in (6) 119875119887 = 119867 lowast 119875119906 (5)

119867 = [[[11988611 11988612 1198861311988621 11988622 1198862311988631 11988632 11988633

]]] (6)

where 119875119906 = [119909119906 119910119906 1]119879 denotes the corner position in thecoordinate of corrected image 119875119887 = [119909119887 119910119887 1]119879 denotes thetarget position in the coordinate of output image

Secondly we project every pixel of 4 corrected images intothe coordinate of output image as shown in119908 = 11988631119909119888 + 11988632119910119888 + 11988633

119909119900 = 11988611119909119888 + 11988612119910119888 + 11988613119908119910119900 = 11988621119909119888 + 11988622119910119888 + 11988623119908

(7)

where (119909119888 119910119888) denotes the pixel position in the coordinateof corrected image (119909119900 119910119900) denotes the pixel position in thecoordinate of output image

33 Image Blending As the corrected images from 4 camerasare different from each other in brightness saturation andcolor we blend them to improve the visual effect of outputimage by using ring fusion method

The detailed process is shown as follows

(1) Equalization Preprocessing The ldquoimadjustrdquo function inMatlab is used for equalization preprocessing to reduce

Journal of Robotics 5

(a) Original image (b) After imadjust (c) After ring color matching (d) After interpolation

Figure 4 The image blending result

the brightness difference among images For example theoriginal image of left view angle is shown in Figure 4(a) andthe processing result is Figure 4(b)

(2) Ring Color Matching

Step 1 (spatial transformation) As RGB space has a strongcorrelation it is not suitable for image color processing Sowetransform RGB space to the l120572120573 space where the correlationbetween three channels is the smallest The space conversionprocess includes three transformations namely 119877119866119861 rarr119862119868119864 119883119884119885 rarr 119871119872119878 rarr 119897120572120573

Firstly from RGB space to CIE XYZ space one has

[[[119883119884119885]]]= [[[

05141 03239 0160402651 06702 0064100241 01228 08444]]][[[119877119866119861]]] (8)

Secondly from CIE XYZ space to LMS space one has

[[[119871119872119878]]]= [[[

03897 06890 minus00787minus02298 11834 004640 0 1]]][[[119883119884119885]]] (9)

Since the data are scattered in the LMS space it isfurther converted to a logarithmic space with a base of 10as shown in (10) This makes the data distribution not onlymore converging but also in line with the results of thepsychological and physical research of human feeling forcolor 119871 = log 119871

119872 = log119872119878 = log 119878

(10)

Finally from LMS space to l120572120573 space one has (11) Thistransformation is based on the principal component analysis(PCA) of the data where l is the first principal component120572 is the second principal component and 120573 is the thirdprincipal component

[[[119897120572120573]]]= [[[[[[[

1radic3 0 00 1radic6 00 0 1radic2

]]]]]]][[[1 1 11 1 minus21 minus1 0

]]][[[119871119878119872]]] (11)

After the above three steps the conversion from RGB tol120572120573 space is completed

Step 2 (color registration) Firstly the mean and standarddeviations of every channel in l120572120573 space are calculatedaccording to

120583 = 1119873119873sum119894=1

V119894

120590 = radic 1119873119873sum119894=1

(V119894 minus 120583)2(12)

where 120583 denotes the mean value119873 denotes the total numberof pixels V119894 denotes the value of the pixel 119894 and 120590 indicatesthe standard deviation

Secondly the color matching factors are calculatedaccording to

119891119897 = 120590119897V1120590119897V2

6 Journal of Robotics

119891120572 = 120590120572V1120590120572V2119891120573 = 120590120573V1120590120573V2

(13)

where119891119897 denotes the factor thatmatches the color of V2 imageto V1 in channel 119897 120590119897V1 denotes the variance of V1 image inchannel 119897 120590119897V2 denotes the variance of V2 image in channel lAnd the rest is similar

Finally we match the color of images as shown in

1198971015840V2 = 119891119897 lowast (119897V1 minus 119897V1) + 119897V21205721015840V2 = 119891120572 lowast (120572V1 minus 120572V1) + 120572V21205731015840V2 = 119891120573 lowast (120573V1 minus 120573V1) + 120573V2

(14)

where 1198971015840V2 denotes pixel value of image V2 after color matchingin channel 119897 119891119897 denotes the factor of color matching inchannel 119897 119897V1 denotes pixel value of image V1 in channel 119897119897V1 denotes average pixel value of image V1 in channel 119897 119897V2denotes average pixel value of image V2 in channel 119897 And therest is similar

Step 3 (global optimization) Then we match the color ofimages from 4 cameras anticlockwise as follows to reach aglobal optimization result Firstly we match the colors of 1198814to 1198813 then 1198813 to 1198812 then 1198812 to 1198811 and finally1198814 to 1198811 whichforms a ring shape as shown in Figure 5Theprocessing resultof left view is shown in Figure 4(c)

(3)Weighted Blending After colormatching the visual effectsof output image have been greatly improved But around thestitching seam between different corrected images the visualeffect is still not enough Therefore we use (15) to ensuresmooth transition The interpolation result of left view angleimage is shown in Figure 4(d)

119874 (119894 119895) = 1198811 (119894 119895) times 119889119889max+ 1198812 (119894 119895) times (1 minus 119889119889max

) 0 lt 119889 lt 119889max

(15)

where119874(119894 119895) denotes the pixel value in output image and (119894 119895)is the position index of pixel 1198811(119894 119895) and 1198812(119894 119895) denote thecorresponding pixel value in corrected images 1198811 and 1198812 119889denotes the distance from pixel to the seam 119889max denotes thewidth of transition field as shown in Figure 5

4 Experiment Result

Some details of the experiment have been provided in part2 of this paper So in this part we only introduce the resultThe fisheye images captured from 4 cameras are shownin Figure 6 And their corresponding corrected images areshown in Figure 7 The corner detection and calculationresult are shown in Figure 8 where Figure 8(a) shows

V1

V2

V3

V4

dGR

Figure 5 The illustration of ring color matching method

Table 1 Comparison of different corner detection algorithms

Method CcN

CbN

Cc + CbN

Rufli [19] 75 0 75Harris [13] 693 165 859Tian [7] 1683 263 1946Integrated corner detection method 75 25 100

the corner positions detected in the distortion image andFigure 8(b) shows the corresponding positions calculatedin the corrected image The integrated corner detectionalgorithm is compared with several other corner detectionalgorithms in Table 1

In Table 1 Cc denotes the number of corner pointsdetected correctly in chessboard Cb denotes the number ofcorner points detected correctly in big black box N denotesall the number of corner points detected in the calibrationscene The Rufli method cannot detect vertices of the bigblack boxTheHarris and Shi-Tomasi methods cannot detectall target vertices and generate a lot of corner redundancyAnd the integrated corner detection algorithm can accuratelyextract all the target corner points of calibration pattern in thescene As a result the integrated corner detection algorithmproposed by us is effective

The output image result is shown in Figure 9 Figure 9(a)is the result before image blending and Figure 9(b) is theresult after image blending The experimental results showthat the proposed algorithm has visual effect around thestitching seam which proves that our ring fusion method iseffective

5 Conclusion

This paper has proposed a ring fusion method to obtain abetter visual effect of AVM system for intelligent drivingTo achieve this condition an integrated corner detectionmethod of image registration and a ring shape scheme forimage blending have been presented Experiment resultsprove that this designed approach is satisfactory 100 of the

Journal of Robotics 7

(a) Front view (b) Right view

(c) Back view (d) Left view

Figure 6 Fisheye images from each camera

(a) Front view (b) Right view

(c) Back view (d) Left view

Figure 7 Corresponding corrected images

8 Journal of Robotics

(a) Corner positions in distorted image (b) Corresponding positions in undistorted image

Figure 8 The corner detection and calculation result

(a) Before fusion (b) After fusion

Figure 9 Stitched bird view image of AVM

required corner is accurately and fully automatically detectedThe transition around the fusion seam is smooth with noobvious stitching trace However the images we processedin this experiment are static So in the future work we willtransplant this algorithm to development board for dynamicreal-time testing and try to apply the ring fusion method tomore other occasions



Journal of Robotics 5

Figure 4: The image blending result. (a) Original image; (b) after imadjust; (c) after ring color matching; (d) after interpolation.

the brightness difference among images. For example, the original image of the left view angle is shown in Figure 4(a), and the processing result in Figure 4(b).

(2) Ring Color Matching

Step 1 (spatial transformation). Because the RGB channels are strongly correlated, RGB space is not suitable for image color processing, so we transform RGB space to lαβ space, where the correlation between the three channels is smallest. The conversion consists of three transformations: RGB → CIE XYZ → LMS → lαβ.

Firstly, from RGB space to CIE XYZ space, one has

$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} 0.5141 & 0.3239 & 0.1604 \\ 0.2651 & 0.6702 & 0.0641 \\ 0.0241 & 0.1228 & 0.8444 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix} \quad (8)$$

Secondly, from CIE XYZ space to LMS space, one has

$$\begin{bmatrix} L \\ M \\ S \end{bmatrix} = \begin{bmatrix} 0.3897 & 0.6890 & -0.0787 \\ -0.2298 & 1.1834 & 0.0464 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} \quad (9)$$

Since the data are scattered in LMS space, they are further converted to logarithmic space with base 10, as shown in (10). This makes the data distribution not only more convergent but also consistent with psychophysical research on human color perception:

$$L = \log_{10} L, \quad M = \log_{10} M, \quad S = \log_{10} S \quad (10)$$

Finally, from LMS space to lαβ space, one has (11). This transformation is based on principal component analysis (PCA) of the data, where l is the first principal component, α the second, and β the third:

$$\begin{bmatrix} l \\ \alpha \\ \beta \end{bmatrix} = \begin{bmatrix} 1/\sqrt{3} & 0 & 0 \\ 0 & 1/\sqrt{6} & 0 \\ 0 & 0 & 1/\sqrt{2} \end{bmatrix} \begin{bmatrix} 1 & 1 & 1 \\ 1 & 1 & -2 \\ 1 & -1 & 0 \end{bmatrix} \begin{bmatrix} L \\ M \\ S \end{bmatrix} \quad (11)$$

After the above three steps, the conversion from RGB to lαβ space is complete.
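As a rough illustration (not the authors' Matlab toolbox), the three-step conversion can be sketched in Python/NumPy, with the matrices taken verbatim from (8), (9), and (11); the small eps guard against log10(0) is our own assumption:

```python
import numpy as np

# Matrices from Eqs. (8), (9), and (11).
RGB2XYZ = np.array([[0.5141, 0.3239, 0.1604],
                    [0.2651, 0.6702, 0.0641],
                    [0.0241, 0.1228, 0.8444]])
XYZ2LMS = np.array([[0.3897, 0.6890, -0.0787],
                    [-0.2298, 1.1834, 0.0464],
                    [0.0, 0.0, 1.0]])
LMS2LAB = np.diag([1 / np.sqrt(3), 1 / np.sqrt(6), 1 / np.sqrt(2)]) @ \
          np.array([[1.0, 1.0, 1.0],
                    [1.0, 1.0, -2.0],
                    [1.0, -1.0, 0.0]])

def rgb_to_lab(rgb, eps=1e-6):
    """Convert an H x W x 3 RGB image (floats in [0, 1]) to l-alpha-beta space."""
    lms = rgb @ (XYZ2LMS @ RGB2XYZ).T          # Eqs. (8)-(9) fused into one matrix
    log_lms = np.log10(np.maximum(lms, eps))   # Eq. (10); eps guards log10(0)
    return log_lms @ LMS2LAB.T                 # Eq. (11)
```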

Step 2 (color registration). Firstly, the mean and standard deviation of every channel in lαβ space are calculated according to

$$\mu = \frac{1}{N}\sum_{i=1}^{N} v_i, \qquad \sigma = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(v_i - \mu\right)^2} \quad (12)$$

where μ denotes the mean value, N the total number of pixels, v_i the value of pixel i, and σ the standard deviation.

Secondly, the color matching factors are calculated according to

$$f_l = \frac{\sigma^{l}_{v1}}{\sigma^{l}_{v2}}, \qquad f_\alpha = \frac{\sigma^{\alpha}_{v1}}{\sigma^{\alpha}_{v2}}, \qquad f_\beta = \frac{\sigma^{\beta}_{v1}}{\sigma^{\beta}_{v2}} \quad (13)$$

where f_l denotes the factor that matches the color of image v2 to image v1 in channel l, σ^l_{v1} denotes the standard deviation of image v1 in channel l, and σ^l_{v2} that of image v2 in channel l; the α and β channels are analogous.

Finally, we match the color of the images as shown in

$$l'_{v2} = f_l \left(l_{v1} - \bar{l}_{v1}\right) + \bar{l}_{v2}, \qquad \alpha'_{v2} = f_\alpha \left(\alpha_{v1} - \bar{\alpha}_{v1}\right) + \bar{\alpha}_{v2}, \qquad \beta'_{v2} = f_\beta \left(\beta_{v1} - \bar{\beta}_{v1}\right) + \bar{\beta}_{v2} \quad (14)$$

where l'_{v2} denotes the pixel value of image v2 after color matching in channel l, f_l the color matching factor in channel l, l_{v1} the pixel value of image v1 in channel l, l̄_{v1} the average pixel value of image v1 in channel l, and l̄_{v2} the average pixel value of image v2 in channel l; the α and β channels are analogous.

Step 3 (global optimization). We then match the colors of the images from the 4 cameras anticlockwise to reach a globally optimized result: first V4 to V3, then V3 to V2, then V2 to V1, and finally V4 to V1, which forms a ring, as shown in Figure 5. The processing result of the left view is shown in Figure 4(c).
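A minimal NumPy sketch of Steps 2 and 3 (again not the authors' Matlab toolbox): `match_color` implements the statistics transfer of (12)–(14), written here in the standard Reinhard color-transfer form, i.e., it rescales v2's deviations from its own mean by the factors of (13) and shifts them onto v1's mean. The arrays are assumed to already be in lαβ space:

```python
import numpy as np

def match_color(lab_v1, lab_v2):
    """Transfer the per-channel color statistics of lab_v2 toward lab_v1.

    Standard Reinhard-style form of Eqs. (12)-(14): scale v2's deviations
    by f = sigma_v1 / sigma_v2 (Eq. (13)) and shift them onto v1's mean.
    Inputs are H x W x 3 arrays in l-alpha-beta space.
    """
    mu1 = lab_v1.mean(axis=(0, 1))                           # Eq. (12), per channel
    mu2 = lab_v2.mean(axis=(0, 1))
    f = lab_v1.std(axis=(0, 1)) / lab_v2.std(axis=(0, 1))    # Eq. (13)
    return f * (lab_v2 - mu2) + mu1                          # Eq. (14)-style transfer

def ring_match(v1, v2, v3, v4):
    """Step 3: match colors anticlockwise around the ring
    (V4 to V3, V3 to V2, V2 to V1, and finally V4 to V1)."""
    v4 = match_color(v3, v4)
    v3 = match_color(v2, v3)
    v2 = match_color(v1, v2)
    v4 = match_color(v1, v4)
    return v1, v2, v3, v4
```

After the transfer, the matched image has exactly v1's per-channel mean and standard deviation, which is what makes the pairwise chain around the ring converge toward a common appearance.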

(3) Weighted Blending. After color matching, the visual effect of the output image is greatly improved, but around the stitching seams between different corrected images it is still unsatisfactory. Therefore, we use (15) to ensure a smooth transition. The interpolation result of the left view image is shown in Figure 4(d).

$$O(i, j) = V_1(i, j) \times \frac{d}{d_{\max}} + V_2(i, j) \times \left(1 - \frac{d}{d_{\max}}\right), \quad 0 < d < d_{\max} \quad (15)$$

where O(i, j) denotes the pixel value in the output image and (i, j) is the position index of the pixel; V_1(i, j) and V_2(i, j) denote the corresponding pixel values in the corrected images V_1 and V_2; d denotes the distance from the pixel to the seam; and d_max denotes the width of the transition band, as shown in Figure 5.
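Equation (15) can be sketched in NumPy as follows; the per-pixel distance map d is assumed to be precomputed (e.g., as the distance of each pixel to the seam line), which is our assumption, not a detail given in the text:

```python
import numpy as np

def blend_seam(v1, v2, d, d_max):
    """Distance-weighted blending of Eq. (15): on the seam (d = 0) the output
    equals V2; at the far edge of the transition band (d = d_max) it equals V1.

    v1, v2 : H x W x 3 corrected images; d : H x W per-pixel distance to the seam.
    """
    w = np.clip(d / d_max, 0.0, 1.0)[..., None]   # weight for V1, broadcast over channels
    return v1 * w + v2 * (1.0 - w)
```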

4. Experiment Results

Some details of the experiment were already given in Section 2, so here we only present the results. The fisheye images captured by the 4 cameras are shown in Figure 6, and the corresponding corrected images are shown in Figure 7. The corner detection and calculation results are shown in Figure 8, where Figure 8(a) shows the corner positions detected in the distorted image and Figure 8(b) shows the corresponding positions calculated in the corrected image. The integrated corner detection algorithm is compared with several other corner detection algorithms in Table 1.

Figure 5: Illustration of the ring color matching method.

Table 1: Comparison of different corner detection algorithms (values in %).

Method | Cc/N | Cb/N | (Cc + Cb)/N
Rufli [19] | 75 | 0 | 75
Harris [13] | 69.3 | 16.5 | 85.9
Tian [7] | 16.83 | 2.63 | 19.46
Integrated corner detection method | 75 | 25 | 100

In Table 1, Cc denotes the number of corner points detected correctly on the chessboard, Cb the number of corner points detected correctly on the big black box, and N the total number of corner points detected in the calibration scene. The Rufli method cannot detect the vertices of the big black box. The Harris and Shi-Tomasi methods cannot detect all target vertices and generate considerable corner redundancy. The integrated corner detection algorithm accurately extracts all the target corner points of the calibration pattern in the scene. As a result, the integrated corner detection algorithm we propose is effective.

The output image is shown in Figure 9: Figure 9(a) is the result before image blending and Figure 9(b) the result after image blending. The experimental results show that the proposed algorithm achieves a smooth visual effect around the stitching seam, which proves that our ring fusion method is effective.

Figure 6: Fisheye images from each camera. (a) Front view; (b) right view; (c) back view; (d) left view.

Figure 7: Corresponding corrected images. (a) Front view; (b) right view; (c) back view; (d) left view.

Figure 8: The corner detection and calculation result. (a) Corner positions in the distorted image; (b) corresponding positions in the undistorted image.

Figure 9: Stitched bird's-eye view image of the AVM. (a) Before fusion; (b) after fusion.

5. Conclusion

This paper has proposed a ring fusion method to obtain a better visual effect in the AVM system for intelligent driving. To this end, an integrated corner detection method for image registration and a ring-shaped scheme for image blending have been presented. Experimental results show that the designed approach is satisfactory: 100% of the required corners are detected accurately and fully automatically, and the transition around the fusion seam is smooth, with no obvious stitching trace. However, the images processed in this experiment are static, so in future work we will port this algorithm to a development board for dynamic real-time testing and try to apply the ring fusion method to other scenarios.

Conflicts of Interest

The authors declare that there are no conflicts of interest.

Acknowledgments

This work was supported by the National High Technology Research and Development Program ("973" Program) of China under Grant no. 2016YFB0100903; Beijing Municipal Science and Technology Commission special major projects under Grant nos. D171100005017002 and D171100005117002; the National Natural Science Foundation of China under Grant no. U1664263; Junior Fellowships for Advanced Innovation Think-Tank Program of China Association for Science and Technology under Grant no. DXB-ZKQN-2017-035; and a project funded by the China Postdoctoral Science Foundation under Grant no. 2017M620765.

References

[1] C. Guo, J. Meguro, Y. Kojima, and T. Naito, "A Multimodal ADAS System for Unmarked Urban Scenarios Based on Road Context Understanding," IEEE Transactions on Intelligent Transportation Systems, vol. 16, no. 4, pp. 1690–1704, 2015.

[2] A. Pandey and U. C. Pati, "Development of saliency-based seamless image compositing using hybrid blending (SSICHB)," IET Image Processing, vol. 11, no. 6, pp. 433–442, 2017.

[3] Ministry of Public Security Traffic Administration, People's Republic of China, Road Traffic Accident Statistic Annual Report, Wuxi, Jiangsu Province: Ministry of Public Security Traffic Management Science Research Institute, 2011.

[4] S. Lee, S. J. Lee, J. Park, and H. J. Kim, "Exposure correction and image blending for planar panorama stitching," in Proceedings of the 16th International Conference on Control, Automation and Systems (ICCAS 2016), pp. 128–131, Korea, October 2016.

[5] H. Ma, M. Wang, M. Fu, and C. Yang, "A New Discrete-time Guidance Law Based on Trajectory Learning and Prediction," in Proceedings of the AIAA Guidance, Navigation, and Control Conference, Minneapolis, Minnesota.

[6] C.-L. Su, C.-J. Lee, M.-S. Li, and K.-P. Chen, "3D AVM system for automotive applications," in Proceedings of the 10th International Conference on Information, Communications and Signal Processing (ICICS 2015), Singapore, December 2015.

[7] F. Tian and P. Shi, "Image Mosaic using ORB descriptor and improved blending algorithm," in Proceedings of the 2014 7th International Congress on Image and Signal Processing (CISP 2014), pp. 693–698, China, October 2014.

[8] S. M. Santhanam, V. Balisavira, S. H. Roh, and V. K. Pandey, "Lens distortion correction and geometrical alignment for Around View Monitoring system," in Proceedings of the 18th IEEE International Symposium on Consumer Electronics (ISCE 2014), Republic of Korea, June 2014.

[9] D. Suru and S. Karamchandani, "Image fusion in variable raster media for enhancement of graphic device interface," in Proceedings of the 1st International Conference on Computing, Communication, Control and Automation (ICCUBEA 2015), pp. 733–736, India, February 2015.

[10] C. Yang, H. Ma, B. Xu, and M. Fu, "Adaptive control with nearest-neighbor previous instant compensation for discrete-time nonlinear strict-feedback systems," in Proceedings of the 2012 American Control Conference (ACC 2012), pp. 1913–1918, Canada, June 2012.

[11] Z. Jiang, J. Wu, D. Cui, et al., "Stitching Method for Distorted Image Based on SIFT Feature Matching," in Proceedings of the International Conference on Computing and Networking Technology, pp. 107–110, 2013.

[12] E. M. Upadhyay and N. K. Rana, "Exposure fusion for concealed weapon detection," in Proceedings of the 2014 2nd International Conference on Devices, Circuits and Systems (ICDCS 2014), India, March 2014.

[13] I. Sipiran and B. Bustos, "Harris 3D: a robust extension of the Harris operator for interest point detection on 3D meshes," The Visual Computer, vol. 27, no. 11, pp. 963–976, 2011.

[14] Y. Zhao and D. Xu, "Fast image blending using seeded region growing," Communications in Computer and Information Science, vol. 525, pp. 408–415, 2015.

[15] Y.-K. Huo, G. Wei, Y.-D. Zhang, and L.-N. Wu, "An adaptive threshold for the Canny operator of edge detection," in Proceedings of the 2nd International Conference on Image Analysis and Signal Processing (IASP 2010), pp. 371–374, China, April 2010.

[16] G. Peljor and T. Kondo, "A saturation-based image fusion method for static scenes," in Proceedings of the 6th International Conference on Information and Communication Technology for Embedded Systems (IC-ICTES 2015), Thailand, March 2015.

[17] L. Jiang, J. Liu, D. Li, and Z. Zhu, "3D point sets matching method based on Moravec vertical interest operator," Advances in Intelligent and Soft Computing, vol. 144, no. 1, pp. 53–59, 2012.

[18] J. Lang, "Color image encryption based on color blend and chaos permutation in the reality-preserving multiple-parameter fractional Fourier transform domain," Optics Communications, vol. 338, pp. 181–192, 2015.

[19] M. Rufli, D. Scaramuzza, and R. Siegwart, "Automatic detection of checkerboards on blurred and distorted images," in Proceedings of the 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 3121–3126, France, September 2008.

[20] J.-E. Scholtz, K. Husers, M. Kaup, et al., "Non-linear image blending improves visualization of head and neck primary squamous cell carcinoma compared to linear blending in dual-energy CT," Clinical Radiology, vol. 70, no. 2, pp. 168–175, 2015.

[21] Y. Liu, S. Liu, Y. Cao, and Z. Wang, "Automatic chessboard corner detection method," IET Image Processing, vol. 10, no. 1, pp. 16–23, 2016.

[22] Y. Zhang, S. Deng, Z. Liu, and Y. Wang, "Aesthetic QR Codes Based on Two-Stage Image Blending," in MultiMedia Modeling, vol. 8936 of Lecture Notes in Computer Science, pp. 183–194, Springer International Publishing, Cham, 2015.

[23] K. Pulli, M. Tico, and Y. Xiong, "Mobile panoramic imaging system," in Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW 2010), pp. 108–115, USA, June 2010.

[24] X. Zhang, H. Gao, M. Guo, G. Li, Y. Liu, and D. Li, "A study on key technologies of unmanned driving," CAAI Transactions on Intelligence Technology, vol. 1, no. 1, pp. 4–13, 2016.

[25] Y. Tang and J. Shin, "Image Stitching with Efficient Brightness Fusion and Automatic Content Awareness," in Proceedings of the International Conference on Signal Processing and Multimedia Applications, pp. 60–66, Vienna, Austria, August 2014.

[26] H. B. Gao, X. Y. Zhang, T. L. Zhang, Y. C. Liu, and D. Y. Li, "Research of intelligent vehicle variable granularity evaluation based on cloud model," Acta Electronica Sinica, vol. 44, no. 2, pp. 365–374, 2016.

[27] J.-H. Cha, Y.-S. Jeon, Y.-S. Moon, and S.-H. Lee, "Seamless and fast panoramic image stitching," in Proceedings of the 2012 IEEE International Conference on Consumer Electronics (ICCE 2012), pp. 29–30, USA, January 2012.

[28] J. Liu, H. Ma, X. Ren, and M. Fu, "Optimal formation of robots by convex hull and particle swarm optimization," in Proceedings of the 2013 3rd IEEE Symposium on Computational Intelligence in Control and Automation (CICA 2013), pp. 104–111, Singapore, April 2013.



[28] J Liu H Ma X Ren and M Fu ldquoOptimal formation of robotsby convex hull and particle swarm optimizationrdquo in Proceedingsof the 2013 3rd IEEE Symposium on Computational Intelligencein Control and Automation CICA 2013 - 2013 IEEE SymposiumSeries on Computational Intelligence SSCI 2013 pp 104ndash111Singapore April 2013


Journal of Robotics

Figure 6: Fisheye images from each camera. (a) Front view; (b) Right view; (c) Back view; (d) Left view.

Figure 7: Corresponding corrected images. (a) Front view; (b) Right view; (c) Back view; (d) Left view.

Figure 8: The corner detection and calculation result. (a) Corner positions in the distorted image; (b) Corresponding positions in the undistorted image.

Figure 9: Stitched bird's-eye view image of the AVM. (a) Before fusion; (b) After fusion.

required corner points are accurately and fully automatically detected. The transition around the fusion seam is smooth, with no obvious stitching trace. However, the images processed in this experiment are static, so in future work we will port the algorithm to a development board for dynamic real-time testing and try to apply the ring fusion method to other applications.
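The smooth transition noted above comes from blending pixels near each stitching seam with weights proportional to their distance from the seam. The following is a minimal sketch of that distance-weight idea for a single horizontal seam band, not the paper's exact ring formulation; the function name and band parameters are hypothetical:

```python
import numpy as np

def linear_seam_blend(img_a, img_b, band_start, band_end):
    """Blend two aligned images across an overlap band.

    Inside [band_start, band_end) the weight of img_a falls linearly
    from 1 to 0 with distance across the band, so the transition over
    the seam is gradual. The paper's ring fusion applies the same idea
    around each of the four seams of the stitched bird's-eye view.
    """
    out = img_a.astype(np.float64).copy()
    for x in range(band_start, band_end):
        # Weight of img_a is proportional to the distance from the band's far edge.
        alpha = (band_end - x) / (band_end - band_start)
        out[:, x] = alpha * img_a[:, x] + (1.0 - alpha) * img_b[:, x]
    # Beyond the band, take img_b exclusively.
    out[:, band_end:] = img_b[:, band_end:]
    return out.astype(img_a.dtype)
```

At the band's midpoint the two images contribute equally, so a hard brightness step between views is replaced by a linear ramp.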

Conflicts of Interest

The authors declare that there are no conflicts of interest.

Acknowledgments

This work was supported by the National High Technology Research and Development Program ("973" Program) of China under Grant No. 2016YFB0100903; the Beijing Municipal Science and Technology Commission special major program under Grant Nos. D171100005017002 and D171100005117002; the National Natural Science Foundation of China under Grant No. U1664263; the Junior Fellowships for Advanced Innovation Think-Tank Program of the China Association for Science and Technology under Grant No. DXB-ZKQN-2017-035; and a project funded by the China Postdoctoral Science Foundation under Grant No. 2017M620765.

References

[1] C. Guo, J. Meguro, Y. Kojima, and T. Naito, "A multimodal ADAS system for unmarked urban scenarios based on road context understanding," IEEE Transactions on Intelligent Transportation Systems, vol. 16, no. 4, pp. 1690–1704, 2015.

[2] A. Pandey and U. C. Pati, "Development of saliency-based seamless image compositing using hybrid blending (SSICHB)," IET Image Processing, vol. 11, no. 6, pp. 433–442, 2017.

[3] Ministry of Public Security Traffic Administration, People's Republic of China, Road Traffic Accident Statistic Annual Report, Ministry of Public Security Traffic Management Science Research Institute, Wuxi, Jiangsu Province, 2011.

[4] S. Lee, S. J. Lee, J. Park, and H. J. Kim, "Exposure correction and image blending for planar panorama stitching," in Proceedings of the 16th International Conference on Control, Automation and Systems (ICCAS 2016), pp. 128–131, Republic of Korea, October 2016.

[5] H. Ma, M. Wang, M. Fu, and C. Yang, "A new discrete-time guidance law based on trajectory learning and prediction," in Proceedings of the AIAA Guidance, Navigation, and Control Conference, Minneapolis, Minnesota.

[6] C.-L. Su, C.-J. Lee, M.-S. Li, and K.-P. Chen, "3D AVM system for automotive applications," in Proceedings of the 10th International Conference on Information, Communications and Signal Processing (ICICS 2015), Singapore, December 2015.

[7] F. Tian and P. Shi, "Image mosaic using ORB descriptor and improved blending algorithm," in Proceedings of the 2014 7th International Congress on Image and Signal Processing (CISP 2014), pp. 693–698, China, October 2014.

[8] S. M. Santhanam, V. Balisavira, S. H. Roh, and V. K. Pandey, "Lens distortion correction and geometrical alignment for Around View Monitoring system," in Proceedings of the 18th IEEE International Symposium on Consumer Electronics (ISCE 2014), Republic of Korea, June 2014.

[9] D. Suru and S. Karamchandani, "Image fusion in variable raster media for enhancement of graphic device interface," in Proceedings of the 1st International Conference on Computing, Communication, Control and Automation (ICCUBEA 2015), pp. 733–736, India, February 2015.

[10] C. Yang, H. Ma, B. Xu, and M. Fu, "Adaptive control with nearest-neighbor previous instant compensation for discrete-time nonlinear strict-feedback systems," in Proceedings of the 2012 American Control Conference (ACC 2012), pp. 1913–1918, Canada, June 2012.

[11] Z. Jiang, J. Wu, D. Cui et al., "Stitching method for distorted image based on SIFT feature matching," in Proceedings of the International Conference on Computing and Networking Technology, pp. 107–110, 2013.

[12] E. M. Upadhyay and N. K. Rana, "Exposure fusion for concealed weapon detection," in Proceedings of the 2014 2nd International Conference on Devices, Circuits and Systems (ICDCS 2014), India, March 2014.

[13] I. Sipiran and B. Bustos, "Harris 3D: a robust extension of the Harris operator for interest point detection on 3D meshes," The Visual Computer, vol. 27, no. 11, pp. 963–976, 2011.

[14] Y. Zhao and D. Xu, "Fast image blending using seeded region growing," Communications in Computer and Information Science, vol. 525, pp. 408–415, 2015.

[15] Y.-K. Huo, G. Wei, Y.-D. Zhang, and L.-N. Wu, "An adaptive threshold for the Canny operator of edge detection," in Proceedings of the 2nd International Conference on Image Analysis and Signal Processing (IASP 2010), pp. 371–374, China, April 2010.

[16] G. Peljor and T. Kondo, "A saturation-based image fusion method for static scenes," in Proceedings of the 6th International Conference on Information and Communication Technology for Embedded Systems (IC-ICTES 2015), Thailand, March 2015.

[17] L. Jiang, J. Liu, D. Li, and Z. Zhu, "3D point sets matching method based on Moravec vertical interest operator," Advances in Intelligent and Soft Computing, vol. 144, no. 1, pp. 53–59, 2012.

[18] J. Lang, "Color image encryption based on color blend and chaos permutation in the reality-preserving multiple-parameter fractional Fourier transform domain," Optics Communications, vol. 338, pp. 181–192, 2015.

[19] M. Rufli, D. Scaramuzza, and R. Siegwart, "Automatic detection of checkerboards on blurred and distorted images," in Proceedings of the 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 3121–3126, France, September 2008.

[20] J.-E. Scholtz, K. Husers, M. Kaup et al., "Non-linear image blending improves visualization of head and neck primary squamous cell carcinoma compared to linear blending in dual-energy CT," Clinical Radiology, vol. 70, no. 2, pp. 168–175, 2015.

[21] Y. Liu, S. Liu, Y. Cao, and Z. Wang, "Automatic chessboard corner detection method," IET Image Processing, vol. 10, no. 1, pp. 16–23, 2016.

[22] Y. Zhang, S. Deng, Z. Liu, and Y. Wang, "Aesthetic QR codes based on two-stage image blending," in MultiMedia Modeling, vol. 8936 of Lecture Notes in Computer Science, pp. 183–194, Springer International Publishing, Cham, 2015.

[23] K. Pulli, M. Tico, and Y. Xiong, "Mobile panoramic imaging system," in Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW 2010), pp. 108–115, USA, June 2010.

[24] X. Zhang, H. Gao, M. Guo, G. Li, Y. Liu, and D. Li, "A study on key technologies of unmanned driving," CAAI Transactions on Intelligence Technology, vol. 1, no. 1, pp. 4–13, 2016.

[25] Y. Tang and J. Shin, "Image stitching with efficient brightness fusion and automatic content awareness," in Proceedings of the International Conference on Signal Processing and Multimedia Applications, pp. 60–66, Vienna, Austria, August 2014.

[26] H. B. Gao, X. Y. Zhang, T. L. Zhang, Y. C. Liu, and D. Y. Li, "Research of intelligent vehicle variable granularity evaluation based on cloud model," Acta Electronica Sinica, vol. 44, no. 2, pp. 365–374, 2016.

[27] J.-H. Cha, Y.-S. Jeon, Y.-S. Moon, and S.-H. Lee, "Seamless and fast panoramic image stitching," in Proceedings of the 2012 IEEE International Conference on Consumer Electronics (ICCE 2012), pp. 29–30, USA, January 2012.

[28] J. Liu, H. Ma, X. Ren, and M. Fu, "Optimal formation of robots by convex hull and particle swarm optimization," in Proceedings of the 2013 3rd IEEE Symposium on Computational Intelligence in Control and Automation (CICA 2013), pp. 104–111, Singapore, April 2013.
