
Research Article

High-Accuracy and Real-Time Indoor Positioning System Based on Visible Light Communication and Mobile Robot

Xianmin Li,¹ Zihong Yan,² Linyi Huang,² Shihuan Chen,² and Manxi Liu²

¹State Key Laboratory of Nuclear Power Safety Monitoring Technology and Equipment, China Nuclear Power Engineering Co., Ltd., Shenzhen, Guangdong 518172, China
²School of Automation Science and Engineering, South China University of Technology, Guangzhou 510640, China

Correspondence should be addressed to Xianmin Li (lixianmin@cgnpc.com.cn) and Zihong Yan (1963349419@qq.com).

Received 22 August 2020; Accepted 21 October 2020; Published 5 November 2020

Academic Editor: Samir K. Mondal

Copyright © 2020 Xianmin Li et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

For mobile robots and location-based services, precise and real-time positioning is one of the most basic capabilities, and low-cost positioning solutions are increasingly in demand with broad market potential. In this paper, we design a high-accuracy and real-time indoor localization system based on visible light positioning (VLP) and a mobile robot. First, we design smart LED lamps with VLC and Bluetooth control functions for positioning; the design covers both the hardware and the Bluetooth control. Furthermore, building on the loose coupling of the Robot Operating System (ROS), we design a VLP-based robot system comprising VLP information transmitted by the designed LEDs, a highly robust dynamic tracking algorithm, an LED-ID recognition algorithm, and a triple-light positioning algorithm. We implemented the VLP-based robot positioning system on ROS in an office equipped with the designed LED lamps; it achieves centimeter-level positioning accuracy (90% of errors within 3.231 cm) and supports moving speeds up to approximately 20 km/h. This paper pushes forward the development of VLP applications in indoor robots, showing the great potential of VLP for indoor robot positioning.

1. Introduction

With the development of large-scale facilities such as underground parking lots and shopping malls, robots are required to do more difficult and intelligent work, and the application scenarios of robots become more complex and diversified [1]. To improve the performance of a robot in the working process, a precise and real-time localization system for indoor settings is essential. One of the key problems of mobile robots is how to give them the ability of autonomous navigation, which requires an accurate and real-time positioning system. Previous research on robot positioning has mainly focused on high-cost external sensors such as cameras, depth-sensing cameras, and laser rangefinders, together with complex algorithms [2-5], which require more computation to achieve higher accuracy. WiFi-based positioning is also a common solution for indoor positioning, but even when the data of multiple access points are fused with a particle filter, the positioning error is still more than 1 m [6].

As a wireless transmission technology, visible light communication (VLC) uses electrical signals to control high-speed flashing LEDs to transmit information; a photosensitive device detects the high-frequency flicker and restores it to the transmitted information. VLC technology not only has rich spectrum resources but also suffers little external interference, which can expand the spectrum of next-generation broadband communication technology. Based on indoor VLC technology, visible light positioning (VLP) realizes indoor positioning by sending the position information of LEDs to the positioning terminal. VLP can provide convenient data services for indoor users anytime and anywhere, since visible light is closely related to people's daily life and most indoor places contain visible light sources; VLP is therefore superior to other positioning technologies in terms of hardware cost and portability. Radio-based positioning systems are limited in use because of the electromagnetic interference they generate to indoor electronic equipment [7], while VLP offers high real-time performance, is free of electromagnetic radiation, and can be applied to many scenes such as hospitals and nuclear plants. For example, nuclear power plants require robots to work in high-radiation, high-temperature, and high-pressure environments, for which VLP is well suited because of its high transmission rate and anti-interference capability. Furthermore, it has better anti-multipath ability in the indoor environment, which makes it possible for visible light localization to provide higher accuracy [8].

In general, VLP broadly falls into two categories by receiver type, namely, photodiode-based and camera-based. Because of its high sensitivity to light, a photodiode (PD) can separate light sources in space, and the positioning accuracy is not affected by the interference of surrounding light [9]. However, PD-based positioning methods suffer large errors caused by angle measurement, received-signal-strength measurement, light-intensity change, and other factors, resulting in a poor positioning effect [10]. Therefore, some work has been done to overcome this shortcoming. In [11], visual information captured by the camera is first utilized to estimate the incidence angles of the visible lights, and the visual and strength information of the visible light signals are then combined to improve localization. Reference [12] uses multiple PDs and machine learning, which enables the system to localize accurately with only two visible luminaires. Although these efforts optimize the PD-based VLP system to some extent, they require additional sensors. By contrast, camera-based VLP has higher stability and anti-interference ability and more commercial potential, as image sensors are in high demand in commercial terminals. Several prototypes of smartphone-based VLP systems have been developed [13-16], but these systems yield relatively modest performance, usually not enough for practice. The VLP system with commercial smartphones proposed in [17] can support moving speeds up to 18 km/h. The work in [18] used at least three LEDs to transmit their three-dimensional coordinate information, which was received and demodulated by two image sensors near the unknown position, and the unknown location was then calculated based on the geometry of the LED images created on the image sensors. An angle-of-arrival localization algorithm based on three or more LEDs was proposed in [15], where a camera is regarded as an angle-of-arrival sensor; its average error is greater than 10 cm. The work in [19, 20] uses an EKF-based fusion method with an IMU and a rolling-shutter camera to realize a robust visible light positioning system, which can reach centimeter-level accuracy. The work in [10, 19, 21] first proposed VLP-based systems on robots; however, they all implemented the system at the experimental level only.

Therefore, in this paper, we set up a high-accuracy and real-time robot localization system in an office, that is, an indoor robot positioning system based on the Robot Operating System (ROS) and VLP, which realizes an average accuracy of 2.14 cm and supports moving speeds up to 20 km/h. It combines ROS and VLP and puts them into practice. In addition, we design smart LED lamps with the VLC function for positioning that can be controlled through Bluetooth. The rest of the paper is organized as follows: Section 2 introduces the design of the VLP-based robot system; the implementation and analysis of the VLP-based robot system are discussed in Section 3; and Section 4 concludes the paper.

2. The Design of the VLP-Based Robot System

The architecture of the VLP-based robot system is shown in Figure 1. The smart LED lamps with VLC and Bluetooth control functions are used as transmitters of positioning information as well as for illumination. The camera installed vertically on the robot captures an image sequence; then, with the VLC information obtained by the dynamic LED-ROI tracking algorithm and the LED-ID recognition algorithm, the robot uses the positioning algorithm to locate its own position.

2.1. The Design of the Smart LED Lamp

VLP takes the LED lamps as signal hotspots that send location information. The LED lamps are driven by VLP modulators that simultaneously illuminate and broadcast their location signals. When the LED lights with VLP modulators are installed, each lamp is assigned a unique identifier, and the installed LED lights are associated with the corresponding coordinates in the positioning system database. As the source of location information for the positioning system, an LED lamp with a visible light communication function that can be adjusted through Bluetooth is described in this section.

2.1.1. Hardware Design of the Smart LED Lamp

The self-designed smart LED lamp provides the VLC function together with a Bluetooth cloud-control model, which makes it easy to change the ID code and to monitor lamps by location. By using visible light as the signal transmission carrier, visible light communication equipment can serve as both illumination device and signal source, greatly reducing the cost of equipment. The hardware design of the smart LED lamp is shown in Figure 2: a VLC controller is added between the power supply (LED driver) and the LED lamp. Using a BLE SoC as the microcontroller of the VLC controller, the VLC data/frequency can be configured directly through the BLE wireless channel, and the VLC control signal can be generated through one of the I/O pins of the BLE SoC.

2.1.2. Generation of Modulation Signals Based on Bluetooth Control

As shown in Figure 3, the hardware architecture of a common BLE SoC consists of the following components; in particular, many SoCs have a DMA controller, which can transfer data from peripheral devices such as the ADC to memory with minimal CPU intervention, thus achieving high overall performance with high power efficiency.

The universal serial peripheral interface (SPI) with the DMA function is used in the proposed system, as shown in Figure 4. The DMA controller is set to repeat mode, and the SPI frequency is adjusted according to the VLC bit duration. After being powered on, all stored VLC data are loaded from flash memory to a dedicated RAM area, and its starting address is configured as the source address of the DMA controller. After triggering, the SPI module starts the continuous VLC transmission without CPU intervention [22].
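To make the data path concrete, the following is a minimal sketch of preparing the repeating bit pattern that would be loaded once into RAM and then streamed out by DMA over SPI. The frame layout (a short header plus an 8-bit ID) and the 8:1 ratio of SPI bit rate to VLC bit rate are illustrative assumptions, not the authors' firmware.

```python
# Sketch of building the repeating VLC byte pattern for DMA-driven SPI output.

def vlc_spi_buffer(vlc_id, spi_hz=40000, bit_hz=5000):
    """Expand an 8-bit VLC ID into the byte stream kept in RAM.

    With spi_hz / bit_hz == 8, one VLC bit occupies exactly one SPI byte
    (0xFF = LED on, 0x00 = LED off) on the MOSI line driving the switch.
    """
    assert spi_hz // bit_hz == 8
    header = [1, 0, 0, 1]  # hypothetical frame delimiter for the receiver
    bits = header + [(vlc_id >> i) & 1 for i in range(7, -1, -1)]
    return bytes(0xFF if b else 0x00 for b in bits)

buffer = vlc_spi_buffer(0x2B)  # loaded to RAM once; DMA repeats it forever
```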

The basic architecture of the smart LED lamps with Bluetooth control is shown in Figure 5. Information about each lamp's VLC identifier (ID) and its Bluetooth address, as well as its physical location, is stored in the cloud [23]. The position data are uploaded by Bluetooth and stored in the on-board data memory of the VLC controller. Then the memory generates a modulation pattern to modulate the LED lamp, broadcasting a unique position identifier for each lamp, which is the key to indoor localization. Users can shoot the LED light with the camera of a smartphone to obtain the unique VLC ID corresponding to the LED, then convert the received VLC ID into the corresponding Bluetooth MAC address stored on a remote server or in the cloud, and thereby realize further control of the LED lamps. The terminal can also obtain the actual location of the LED, which is used as the data for subsequent visible light indoor positioning in the proposed system.
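The cloud record behind Figure 5 is essentially a table keyed by VLC ID. A toy sketch of the lookup follows; the ID values and MAC addresses are made up, while the locations reuse the lamp coordinates from Table 1:

```python
# Hypothetical registry mapping each lamp's VLC ID to its ceiling
# coordinates (cm) and Bluetooth MAC address, mirroring Figure 5.
LAMP_DB = {
    0x01: {"location": (13, 159, 285), "mac": "AA:BB:CC:00:00:01"},
    0x02: {"location": (159, 159, 285), "mac": "AA:BB:CC:00:00:02"},
    0x03: {"location": (159, 13, 285), "mac": "AA:BB:CC:00:00:03"},
}

def resolve(vlc_id):
    """Return the (location, MAC) pair for a decoded VLC ID."""
    entry = LAMP_DB[vlc_id]
    return entry["location"], entry["mac"]

loc, mac = resolve(0x02)  # position for localization + handle for lamp control
```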

Figure 1: The architecture of the VLP-based robot system. Transmitter (designed LED with VLC and Bluetooth control functions): unique ID → modulator → LED driver → LED luminaires → VLC signal. Receiver (mobile robot): acquisition of image → LED-ROI tracking → LED-ID recognition → triple-light positioning.

Figure 2: Block diagram of the LED lamp. Power source → rectifying and step-down stage → LED; a BLE SoC drives the LED switch through an I/O port, and the VLC data/frequency and current can be adjusted over BLE.


2.2. Rolling Shutter Mechanism of the CMOS Image Sensor

Based on the modulated LED lamps, the recognition of the LED-ID is realized by image sensor-based VLP, which utilizes the rolling shutter mechanism of the CMOS image sensor. For a CCD sensor, all pixels are exposed at the same time, so at the end of each exposure the data of all pixels are read out simultaneously; this mechanism is often referred to as the global shutter of the CCD sensor. For a CMOS sensor, however, when the exposure of one row is completed, the data of this row are read out immediately, which means that exposure and data reading are performed row by row. This working mechanism is called the rolling shutter mechanism of the CMOS sensor. Because of it, the LED image captured by the CMOS sensor produces bright and dark stripes as the LED is turned on and off during one exposure period. The LED image acquisition using the CMOS sensor is illustrated in Figure 6.
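The stripe pattern links the LED modulation frequency to the sensor's row timing: each row is exposed slightly later than the previous one, so one on-off period of the LED maps to a fixed number of rows. A back-of-envelope sketch, where the per-row readout time is an assumed value not given in the paper:

```python
# Rows spanned by one bright+dark stripe pair under a rolling shutter.
f_mod = 5000      # LED modulation frequency in Hz (Table 1)
t_row = 25e-6     # assumed per-row readout time of the CMOS sensor, in s

rows_per_period = 1.0 / (f_mod * t_row)
print(rows_per_period)   # 8.0 -> ~4 bright + ~4 dark rows at 50% duty cycle
```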

Figure 3: Block diagram of the BLE SoC: processor, memory (flash/ROM/RAM), DMA, timers/counters, analog and digital peripherals, GPIO, RF + modem + LL privacy, oscillators, and power management unit on the AHB/APB buses.

Figure 5: Smart lighting system architecture. The BLE SoC in each lamp streams its VLC ID; a cloud table maps each ID (ID1, ID2, ..., IDn) to its location and Bluetooth MAC address.

Figure 4: VLC control signal generation. The VLC data are loaded from flash to the RAM once, and the DMA controller streams them through SPI to the I/O pin.


2.3. LED-ROI Tracking Based on the Improved CAMshift-Kalman Algorithm

Real-time positioning of a mobile robot requires real-time shooting and processing of each image to obtain the region of interest (ROI) of the LED luminaire in the image. The success of the LED-ID detection and recognition method is inseparable from the accurate detection of the LED-ROI, which determines the real-time performance and robustness of the system. We use an improved CAMshift-Kalman algorithm to improve the accuracy and robustness of the VLP-based system. To obtain better tracking performance, the Bhattacharyya coefficient is used to update the observation noise matrix of the Kalman filter in real time. The CAMshift algorithm tracks the target and outputs its position as the measurement signal, and the Kalman filter corrects the target position. The algorithm thus not only combines CAMshift with the Kalman filter but also introduces the Bhattacharyya coefficient; for more details, one could refer to our previous work [24]. We tested the algorithm in the dynamic case with modulated tubes as background interference. Under the interference of an LED tube, the algorithm still ensures accurate detection of the LED-ROI, which reflects its robustness and good real-time performance. The dynamic tracking performance of the improved CAMshift-Kalman algorithm is shown in Figure 7.
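As a rough illustration of the CAMshift-plus-Kalman loop (without the Bhattacharyya-based noise update described in [24]), the sketch below uses OpenCV's CamShift as the measurement source and a constant-velocity Kalman filter as the corrector; the histogram bins, noise covariances, and grayscale back-projection are assumptions, not the authors' code.

```python
import cv2
import numpy as np

def track_led_roi(frames, init_box):
    x, y, w, h = init_box                              # initial LED-ROI (pixels)
    kf = cv2.KalmanFilter(4, 2)                        # state [x, y, vx, vy]
    kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                    [0, 1, 0, 1],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], np.float32)
    kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                     [0, 1, 0, 0]], np.float32)
    kf.processNoiseCov = 1e-3 * np.eye(4, dtype=np.float32)
    kf.measurementNoiseCov = 1e-1 * np.eye(2, dtype=np.float32)
    kf.statePost = np.array([[x + w / 2], [y + h / 2], [0], [0]], np.float32)

    roi = frames[0][y:y + h, x:x + w]                  # grayscale frames assumed
    hist = cv2.calcHist([roi], [0], None, [32], [0, 256])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

    box = tuple(init_box)
    for frame in frames[1:]:
        back = cv2.calcBackProject([frame], [0], hist, [0, 256], 1)
        _, box = cv2.CamShift(back, box, term)         # CAMshift = measurement
        cx = box[0] + box[2] / 2.0
        cy = box[1] + box[3] / 2.0
        kf.predict()
        est = kf.correct(np.array([[cx], [cy]], np.float32))
        yield (float(est[0, 0]), float(est[1, 0])), box  # fused center + ROI
```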

2.4. The LED-ID Recognition

The LED-ID recognition in this paper assigns certain characteristics to the bright and dark stripes captured in the image. By introducing four characteristic variables (frequency, duty cycle, distance, and phase difference coefficient), the LED-ID optical stripe code captured by the CMOS image sensor is characterized. The features can be the number of bright stripes of the stripe code in the LED pixel area, the area of the LED pixel region, the widths of the bright and dark stripes, and the phase difference coefficient between the stripes. Then, after the LED-ROI is obtained by the aforementioned tracking algorithm, these features are extracted through simple image processing, and the location information of the LED-ID is recognized through a preestablished database. The details can be found in [25].
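A simplified stand-in for this feature extraction is sketched below: the rolling-shutter stripes are assumed to run horizontally, so the row-wise mean of the ROI gives a one-dimensional bright/dark profile from which stripe counts and widths follow by run-length analysis. The threshold choice and feature set are illustrative, not the method of [25].

```python
import numpy as np

def stripe_features(roi_gray, thresh=None):
    """Extract stripe-code features from a grayscale LED-ROI."""
    profile = roi_gray.mean(axis=1)                 # row-wise brightness
    if thresh is None:
        thresh = profile.mean()                     # simple adaptive threshold
    binary = profile > thresh                       # True = bright stripe
    edges = np.flatnonzero(np.diff(binary.astype(int)))
    runs = np.diff(np.r_[0, edges + 1, binary.size])  # stripe widths in rows
    bright = runs[0::2] if binary[0] else runs[1::2]
    dark = runs[1::2] if binary[0] else runs[0::2]
    return {
        "num_bright_stripes": len(bright),
        "mean_bright_width": float(bright.mean()) if len(bright) else 0.0,
        "mean_dark_width": float(dark.mean()) if len(dark) else 0.0,
        "duty_cycle": float(binary.mean()),         # bright fraction of the ROI
    }
```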

2.5. Triple-Light Positioning Algorithm

As shown in Figure 8, when three or more LEDs are detected in the field of view of the image sensor, the robot selects three LEDs for positioning after information extraction through the algorithm. The coordinates of the LEDs are (x1, y1, z1), (x2, y2, z2), and (x3, y3, z3). Generally speaking, the height of the ceiling is the same in one place, so z1 = z2 = z3. The position of the center point O of the lens can then be calculated from the similar-triangle relationship.

Therefore, according to the lens focal length f and the coordinates (i_k, j_k) on the image plane, the distance d'_{Ok} between each LED image center and the lens center O can be calculated:

$$ d'_{Ok} = \sqrt{f^2 + i_k^2 + j_k^2}, \quad k = 1, 2, 3. \qquad (1) $$

And according to the similar triangles, the distance d_{Ok} (k = 1, 2, 3) from the center O of the lens to each LED anchor can be obtained:

$$ d_{Ok} = \frac{H \times d'_{Ok}}{f}, \qquad (2) $$

where H is the vertical distance between the LED plane and the lens plane:

$$ H = f\,\frac{a}{a'} = f\,\frac{b}{b'} = f\,\frac{c}{c'}. \qquad (3) $$

Then the coordinates of the lens center point O = (x, y, z) can be calculated from the following system:

$$ \begin{cases} (x - x_1)^2 + (y - y_1)^2 + H^2 = d_{O1}^2 \\ (x - x_2)^2 + (y - y_2)^2 + H^2 = d_{O2}^2 \\ (x - x_3)^2 + (y - y_3)^2 + H^2 = d_{O3}^2 \end{cases} \qquad (4) $$

The aforementioned system can be converted to

$$ \begin{bmatrix} x \\ y \end{bmatrix} = \frac{1}{2} \begin{bmatrix} x_2 - x_1 & y_2 - y_1 \\ x_3 - x_1 & y_3 - y_1 \end{bmatrix}^{-1} \begin{bmatrix} d_{O1}^2 - d_{O2}^2 - x_1^2 - y_1^2 + x_2^2 + y_2^2 \\ d_{O1}^2 - d_{O3}^2 - x_1^2 - y_1^2 + x_3^2 + y_3^2 \end{bmatrix}. \qquad (5) $$

Figure 6: Rolling shutter mechanism of the CMOS image sensor and the LED image acquisition (rows of the image are exposed and read out in sequence; exposure time, read-out time, and frame rate are indicated along the time axis).


And the z coordinate is

$$ z = z_1 - H = z_2 - H = z_3 - H. \qquad (6) $$

According to equations (5) and (6), we can calculate the position of the camera lens center O. Meanwhile, the coordinates of the robot can be calculated through the static transform between the camera and the robot base. The analysis of the algorithm can be found in our previous work [26].
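Equations (1)-(6) reduce to a small linear solve once the three LED anchors and their image points are known. The following is a sketch under the paper's assumptions (vertical camera, common ceiling height); the function and variable names are illustrative:

```python
import numpy as np

def triple_light_position(leds, img_pts, f, H):
    """Solve equations (1)-(6) for the lens center O = (x, y, z).

    leds:    three world coordinates (x_k, y_k, z_k) of the LEDs (same z).
    img_pts: three image-plane coordinates (i_k, j_k) of the LED centers.
    f:       lens focal length; H: vertical LED-to-lens distance.
    Units must be consistent (e.g., all in cm).
    """
    leds = np.asarray(leds, float)
    img_pts = np.asarray(img_pts, float)
    # (1) distance from each LED image center to the lens center
    d_img = np.sqrt(f**2 + (img_pts**2).sum(axis=1))
    # (2) similar triangles: distance from O to each LED anchor
    d = H * d_img / f
    (x1, y1, _), (x2, y2, _), (x3, y3, _) = leds
    # (5) linearized differences of the sphere equations, solved for (x, y)
    A = np.array([[x2 - x1, y2 - y1],
                  [x3 - x1, y3 - y1]])
    b = 0.5 * np.array([d[0]**2 - d[1]**2 - x1**2 - y1**2 + x2**2 + y2**2,
                        d[0]**2 - d[2]**2 - x1**2 - y1**2 + x3**2 + y3**2])
    x, y = np.linalg.solve(A, b)
    # (6) z coordinate from the common ceiling height
    z = leds[0, 2] - H
    return x, y, z
```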

3. Implementation of the VLP-Based Robot System

3.1. System Setup and Implementation

The setup of the VLP-based robot system is shown in Table 1. A mobile robot (TurtleBot3) was used to build the VLP-based system platform. The images of the LEDs were shot by a MindVision UB-300 industrial camera, which is fixed vertically on the robot with prior extrinsic calibration, and transmitted by a Raspberry Pi 3 Model B. As shown in Figure 9, we implemented the system in an office with three smart LED lamps installed on the ceiling for positioning; the related information of the LED lamps is also given in Figure 9. In the positioning process, the image sensor can detect at least three LEDs to ensure the realization of the triple-light positioning algorithm. Because of the weak processor performance of the Raspberry Pi 3 Model B, the image processing and location calculation run on a remote controller. The operating system of the TurtleBot3 robot is Ubuntu 16.04 MATE, which corresponds to the Kinetic version of ROS, and the system of the remote controller is Ubuntu 16.04 desktop. We demonstrate the performance of the VLP robot system on ROS: the camera installed vertically on the robot captures the image, the robot then immediately obtains the location information stored by the LED through the LED-ROI dynamic tracking algorithm and LED-ID recognition, and finally it realizes centimeter-level localization through the triple-light positioning algorithm. Purple dots are used to represent the positioning result for the geometric center of the camera obtained by VLP.
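On the ROS side, the pipeline above maps naturally onto a node that publishes the computed pose. The following is a minimal sketch assuming rospy and a hypothetical vlp_pipeline module wrapping the tracking, recognition, and positioning steps (the authors' node layout is not published):

```python
#!/usr/bin/env python
import rospy
from geometry_msgs.msg import PoseStamped

# Hypothetical wrapper for the steps of Figure 1; not the authors' code.
from vlp_pipeline import grab_frame, track_leds, recognize_ids, triple_light_position

def main():
    rospy.init_node("vlp_positioning")
    pub = rospy.Publisher("vlp/pose", PoseStamped, queue_size=10)
    f = rospy.get_param("~focal_length")      # lens focal length
    H = rospy.get_param("~led_height")        # vertical LED-to-lens distance
    rate = rospy.Rate(10)                     # ~10 position fixes per second
    while not rospy.is_shutdown():
        frame = grab_frame()                  # image from the industrial camera
        rois = track_leds(frame)              # improved CAMshift-Kalman tracking
        leds, img_pts = recognize_ids(rois)   # LED-IDs -> world coordinates
        x, y, z = triple_light_position(leds, img_pts, f, H)
        msg = PoseStamped()
        msg.header.stamp = rospy.Time.now()
        msg.header.frame_id = "map"           # camera center; transform to base
        msg.pose.position.x, msg.pose.position.y, msg.pose.position.z = x, y, z
        pub.publish(msg)
        rate.sleep()

if __name__ == "__main__":
    main()
```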

Figure 7: Dynamic tracking performance of the improved CAMshift-Kalman algorithm with a modulated tube as background interference, panels (a)-(d); the annotations mark the interference and the LED-ROI obtained by the dynamic tracking algorithm.


3.2. Positioning Accuracy of the VLP-Based Robot System

To evaluate the positioning accuracy of the system, two series of experiments were carried out. The first series tested the performance of the stationary robot. As shown in Figure 10, uniformly distributed points in the experimental area were randomly selected to calculate the deviation between the measured position and the actual position. For the positioning accuracy shown in Figure 10, 90% of the positioning errors are less than 3.231 cm, the maximum positioning error is no more than 5 cm, and the average positioning error is 2.14 cm.
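For reference, the summary statistics in Figure 10 are plain functions of the per-point error set; a sketch of how they could be computed (the authors' evaluation script is not published):

```python
import numpy as np

def error_stats(estimated, actual):
    """Per-point Euclidean errors and the summary statistics of Figure 10.

    estimated, actual: arrays of shape (N, 2), horizontal positions in cm.
    """
    errors = np.linalg.norm(np.asarray(estimated) - np.asarray(actual), axis=1)
    return {
        "mean_error_cm": float(errors.mean()),             # reported: 2.14 cm
        "p90_error_cm": float(np.percentile(errors, 90)),  # reported: 3.231 cm
        "max_error_cm": float(errors.max()),               # reported: < 5 cm
    }
```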


Figure 8: The procedure of triple-light positioning. LED1, LED2, and LED3 at world coordinates (x1, y1, z1), (x2, y2, z2), and (x3, y3, z3) are imaged through the lens center O onto the image plane at (i1, j1), (i2, j2), and (i3, j3); H is the vertical distance between the LED plane and the lens, f is the focal length, and a, b, c (a′, b′, c′) are the distances between the LEDs in the world (on the image plane).

Table 1: Parameters in this paper.

Camera specifications:
  Model: MindVision UB-300
  Pixels (H × V): 2048 × 1536
  Exposure time: 0.02 ms
  Shutter type: electronic rolling shutter
  Acquisition mode: successive and soft trigger

TurtleBot3 robot specifications:
  Module: Raspberry Pi 3 Model B
  CPU: quad-core 1.2 GHz Broadcom BCM2837
  RAM: 1 GB
  Operating system: Ubuntu MATE 16.04

Remote controller specifications:
  Module: Acer VN7-593G
  CPU: quad-core Intel® Core™ i7-7700HQ
  Operating system: Ubuntu 16.04 LTS

System platform specifications:
  Size (L × W × H): 146 × 146 × 285 cm³

LED specifications:
  Coordinates of LED1 (cm): (13, 159, 285)
  Coordinates of LED2 (cm): (159, 159, 285)
  Coordinates of LED3 (cm): (159, 13, 285)
  Half-power angle of LED (ψ1/2): 60°
  Current of each LED: 300 mA
  Rated power: 18 W
  Optical output: 1500 lm, 6000 K
  LED modulation frequency: 5 kHz
  Diameter of each LED: 150 mm


Figure 9: VLP-based system implementation platform. Left: TurtleBot3 with industrial camera (laser radar, industrial camera, wheels, Raspberry Pi, OpenCR board, battery, and motors). Right: ceiling layout with origin (0, 0): LED1 at (13, 159) cm modulated at 1 kHz, LED2 at (159, 159) cm at 3 kHz, and LED3 at (159, 13) cm at 5 kHz.

Figure 10: Stationary positioning of the VLP-based robot system. (a) The 3D positioning results (real vs. estimated positions). (b) The horizontal view of the 3D positioning results. (c) Histogram of the positioning errors. (d) The cumulative distribution function (CDF) of the positioning errors; the marked point (3.231, 0.901) shows that 90.1% of the errors are within 3.231 cm.


We tested the performance for the moving robot in the second series of experiments: we controlled the robot to go straight and to travel a distance after turning, in order to demonstrate the real positioning effect of the localization system at different speeds. As shown in Figure 11, the measured positions of the robot are plotted as purple dots, and the track of the robot's industrial camera coincides closely with the positioning results in the different motion states (see the magnified sections), which reflects the good positioning effect of the VLC positioning system on the ROS mobile robot. Due to the restrictions of the ground field, we let the robot travel at a speed of 2-4 cm/s. However, the moving speed of the robot has no effect on the positioning accuracy; the supported speed depends instead on the real-time performance, which is discussed in Section 3.3. It is worth mentioning that the positioning results are the coordinates of the center of the camera fixed vertically on the robot, which can be transformed to the center of the robot by a coordinate transformation in the VLP robot system.

3.3. Real-Time Positioning Speed of the VLP-Based Robot System

Positioning speed is another key factor in the VLP-based robot system: it represents the moving speed at which the mobile robot can still receive the VLC information and calculate its current position in time. For the image sensor-based VLP system, the mobile robot needs to capture and extract the VLP information before it passes the VLP information transmitter, that is, before the VLC light leaves the field of view (FOV) of the CMOS image sensor. Therefore, the maximum supported moving speed is the speed at which the mobile robot can extract the VLP information in the time the LED takes to cross the image from the leftmost edge to the rightmost edge. We assume that the target LED is tangent to the left edge of the image in frame N and to the right edge of the image in frame N + 1, as shown in Figure 12, and the maximum velocity v of the mobile robot is defined as v = s/t, where s is the distance the terminal device moves between the two frames and t is the time from the beginning of the image capture to the completion of the positioning calculation. Based on the proportional relationship between image coordinates and world coordinates, the relation is expressed as s/r = D/d, where r is the pixel length of the image, D is the actual diameter of the LED, and d is the diameter of the LED in the image.

As mentioned in Section 3.1, due to the weak processing capacity of the Raspberry Pi 3 in the TurtleBot3, most of the image processing cannot be performed directly on the Raspberry Pi. Therefore, the images captured by the CMOS camera on the robot are transmitted to a remote laptop for processing, which leads to positioning delays. We set a timer and a counter in the program, and the average time t is 0.3893 s. As can be seen from Table 1, the actual diameter D of the LED is 150 mm; the pixel length r of the image is 800 pixels, and the LED diameter d in the image is 55.54 pixels. According to the definition of speed, the maximum movement speed of the positioning terminal is about 5.55 m/s, that is, v ≈ 20 km/h. If the robot were equipped with a small computer with better computing power, it could save the time taken to transmit data and support higher moving speeds.
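The arithmetic can be checked directly from these quantities, following s/r = D/d and v = s/t:

```python
# Worked check of the Section 3.3 numbers.
D = 0.150     # actual LED diameter, m (Table 1)
d = 55.54     # LED diameter in the image, pixels
r = 800       # pixel length of the image
t = 0.3893    # average capture-to-position time, s

s = r * D / d                                   # distance moved between frames, ~2.16 m
v = s / t                                       # ~5.55 m/s
print(f"v = {v:.2f} m/s = {v * 3.6:.1f} km/h")  # -> v = 5.55 m/s = 20.0 km/h
```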

Figure 11: The positioning effects for the mobile robot moving along different tracks at different speeds in the VLP-based robot system.

Figure 12: The relative position of the LED in two consecutive frames: (a) frame N; (b) frame N + 1.


4. Conclusion

In this paper, we design a high-accuracy and real-time VLP-based robot system on ROS and implement the indoor VLP system on the TurtleBot3 robot platform in an office, as a practical application scenario, equipped with the designed LED lamps having VLC and Bluetooth control functions. The implementation shows that the VLP-based system can reach centimeter-level positioning accuracy of 2.14 cm on average and has good real-time performance, supporting moving speeds up to 20 km/h, which gives it great research prospects in the autonomous positioning and navigation of indoor robots. In the future, we will apply the VLP-based system to fusion localization and robot navigation on ROS, making full use of the advantages of VLP while compensating for the defects of visible light positioning.

Data Availability

The source code of this work is not publicly available, and the experimental data are reflected in the implementation and analysis sections of the article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Authors' Contributions

Xianmin Li contributed to the conception of the study, performed the data analysis, and helped perform the analysis with constructive discussions. Zihong Yan contributed significantly to the analysis and manuscript preparation and wrote the manuscript. Linyi Huang performed the experiment and the analysis with constructive discussions and wrote the manuscript. Shihuan Chen helped revise the manuscript. Manxi Liu helped perform the analysis with constructive discussions. Xianmin Li, Zihong Yan, and Linyi Huang contributed equally to this work.

References

[1] W. Guan, S. Chen, S. Wen, Z. Tan, H. Song, and W. Hou, "High-accuracy robot indoor localization scheme based on robot operating system using visible light positioning," IEEE Photonics Journal, vol. 12, no. 2, pp. 1-16, 2020.
[2] J. Civera, A. J. Davison, and J. Montiel, "Inverse depth parametrization for monocular SLAM," IEEE Transactions on Robotics, vol. 24, no. 5, p. 932, 2008.
[3] M. Liu and R. Siegwart, "DP-FACT: towards topological mapping and scene recognition with color for omnidirectional camera," in Proceedings of the International Conference on Robotics and Automation (ICRA), Saint Paul, MN, USA, May 2012.
[4] P. Henry, M. Krainin, E. Herbst, X. Ren, and D. Fox, "RGB-D mapping: using depth cameras for dense 3D modeling of indoor environments," in Proceedings of the 12th International Symposium on Experimental Robotics (ISER 2010), New Delhi, India, December 2010.
[5] M. Liu, F. Pomerleau, F. Colas, and R. Siegwart, "Normal estimation for pointcloud using GPU based sparse tensor voting," in Proceedings of the IEEE International Conference on Robotics and Biomimetics (ROBIO), Shenzhen, China, December 2013.
[6] R. Miyagusuku, A. Yamashita, and H. Asama, "Data information fusion from multiple access points for WiFi-based self-localization," IEEE Robotics and Automation Letters, vol. 4, no. 2, pp. 269-276, 2019.
[7] H. Chen, W. Guan, S. Li, and Y. Wu, "Indoor high precision three-dimensional positioning system based on visible light communication using modified genetic algorithm," Optics Communications, vol. 413, pp. 103-120, 2018.
[8] W. Guan, "A novel three-dimensional indoor positioning algorithm design based on visible light communication," Optics Communications, vol. 392, pp. 282-293, 2017.
[9] S. Juneja and S. Vashisth, "Indoor positioning system using visible light communication," in Proceedings of the 2017 International Conference on Computing and Communication Technologies for Smart Nation (IC3TSN), Gurgaon, India, October 2017.
[10] S. Chen, W. Guan, Z. Tan et al., "High accuracy and error analysis of indoor visible light positioning algorithm based on image sensor," 2019, http://arxiv.org/abs/1911.11773.
[11] L. Bai, Y. Yang, C. Feng et al., "An enhanced camera assisted received signal strength ratio algorithm for indoor visible light positioning," in Proceedings of the 2020 IEEE International Conference on Communications Workshops (ICC Workshops), Dublin, Ireland, June 2020.
[12] A. H. A. Bakar, T. Glass, H. Y. Tee, F. Alam, and M. Legg, "Accurate visible light positioning using multiple photodiode receiver and machine learning," IEEE Transactions on Instrumentation and Measurement, 2020.
[13] L. Li, P. Hu, C. Peng, G. Shen, and F. Zhao, "Epsilon: a visible light based positioning system," in Proceedings of the 11th USENIX Conference on Networked Systems Design and Implementation, Berkeley, CA, USA, April 2014.
[14] N. Rajagopal, P. Lazik, and A. Rowe, "Visual light landmarks for mobile devices," in Proceedings of the 13th International Symposium on Information Processing in Sensor Networks, Berlin, Germany, April 2014.
[15] Y.-S. Kuo, P. Pannuto, K.-J. Hsiao, and P. Dutta, "Luxapose: indoor positioning with mobile phones and visible light," in Proceedings of the 20th Annual International Conference on Mobile Computing and Networking (MobiCom), Maui, HI, USA, September 2014.
[16] Z. Yang, Z. Wang, J. Zhang, C. Huang, and Q. Zhang, "Wearables can afford: light-weight indoor positioning with visible light," in Proceedings of the 13th Annual International Conference on Mobile Systems, Applications, and Services (MobiSys), Florence, Italy, May 2015.
[17] J. Fang, Z. Yang, S. Long et al., "High-speed indoor navigation system based on visible light and mobile phone," IEEE Photonics Journal, vol. 9, no. 2, pp. 1-11, 2017.
[18] M. S. Rahman, M. M. Haque, and K.-D. Kim, "High precision indoor positioning using lighting LED and image sensor," in Proceedings of the International Conference on Computer and Information Technology, Dhaka, Bangladesh, December 2011.
[19] Q. Liang, J. Lin, and M. Liu, "Towards robust visible light positioning under LED shortage by visual-inertial fusion," in Proceedings of the 2019 International Conference on Indoor Positioning and Indoor Navigation (IPIN), Pisa, Italy, October 2019.
[20] Q. Liang and M. Liu, "A tightly coupled VLC-inertial localization system by EKF," IEEE Robotics and Automation Letters, vol. 5, no. 2, pp. 3129-3136, 2020.
[21] W. Guan, Research on High Precision Indoor Visible Light Localization Algorithm Based on Image Sensor, Master's thesis, South China University of Technology, Guangdong Province, China, 2019.
[22] C. Qiu, B. Hussain, and C. P. Yue, "Bluetooth based wireless control for iBeacon and VLC enabled lighting," in Proceedings of the 2019 IEEE 8th Global Conference on Consumer Electronics (GCCE), Osaka, Japan, October 2019.
[23] B. Hussain, C. Qiu, and C. P. Yue, "Smart lighting control and services using visible light communication and Bluetooth," in Proceedings of the 2019 IEEE 8th Global Conference on Consumer Electronics (GCCE), Osaka, Japan, October 2019.
[24] W. Guan, Z. Liu, S. Wen, H. Xie, and X. Zhang, "Visible light dynamic positioning method using improved CAMshift-Kalman algorithm," IEEE Photonics Journal, vol. 11, no. 6, pp. 1-22, 2019.
[25] C. Xie, "The LED-ID detection and recognition method based on visible light positioning using proximity method," IEEE Photonics Journal, vol. 10, no. 2, Article ID 7902116, 2018.
[26] W. Guan, S. Wen, L. Liu, and H. Zhang, "High-precision indoor positioning algorithm based on visible light communication using complementary metal-oxide-semiconductor image sensor," Optical Engineering, vol. 58, p. 1, 2019.


Page 2: High-Accuracy and Real-Time Indoor Positioning System ...

on use due to the generation of acoustic interference toindoor electronic equipment [7] while VLP has high real-time free of electromagnetic radiation and can be applied tomany scenes such as hospital and nuclear plant For ex-ample nuclear power plants require robots to work in highradiation high temperature and high pressure environ-ment for which VLP is well suited because of its hightransmission rate and anti-interference capability Fur-thermore it has better antimultipath ability in the indoorenvironment which makes it possible for the visible lightlocation to provide higher accuracy [8]

In general VLP broadly falls into two categories by re-ceiver type namely the photodiode-based and the camera-based Because of its high sensitivity to light photodiode (PD)can separate the light source in space and the positioningaccuracy is not affected by the interference of surroundinglight [9] However the positioning method based on PD willcause large errors due to angle measurement received signalstrength measurement light intensity change and otherreasons resulting in poor positioning effect [10] ereforesome work has been done to overcome this shortcoming In[11] visual information captured by the camera to estimatethe incidence angles of visible lights first is utilized and thevisual and strength information of visible light signals arecombined to improve the effect of location Reference [12]uses multiple PDs and machine learning which enable thesystem to localize accurately with two visual luminairesAlthough these efforts optimize the PD-based VLP system tosome extent they require the addition of additional sensorsBy contrast camera-based has higher stability and anti-in-terference ability and more commercial potential as high-demand image sensors for commercial terminals Severalprototypes of smartphone-based VLP systems have beendeveloped [13ndash16] but these systems yield relatively modestperformance usually not enough for practice e proposedVLP system with commercial smartphones in work [17] cansupportmoving speed up to 18 kmhework in [18] used atleast three LEDs to transmit their three-dimensional coor-dinate information which was received and demodulated bytwo image sensors near the unknown position and the un-known location is then calculated based on the geometry ofthe LED image created on the image sensor An angle ofarrival localization algorithm based on three or more LEDswas proposed in work [15] where a camera is regarded as anangle-of-arrival sensor and its average error is gt10 cm ework in [19 20] uses EKF-based fusion method to realizerobust visible light positioning system with an IMU and arolling-shutter camera which can reach centimeter-levelaccuracy e work in [10 19 21] first proposed VLP-basedsystem on robots however they all implemented the systemat the experimental level only

erefore in this paper we set up a robot localizationsystem in the office with high-accuracy and real-time po-sitioning that is an indoor robot positioning system basedon the robot operating system (ROS) and VLP which re-alizes the accuracy of 214 cm and support the moving speedup to 20 kmh It combines ROS and VLP and puts intopractice In addition we design the smart LED lamps withVLC function for positioning that can be controlled through

Bluetooth e rest of the paper is organized as followsSection 2 introduces the design of VLP-based robot systemImplementation and analysis of the VLP-based robot systemis discussed in Section 3 and Section 4 is the conclusion

2 The Design of the VLP-Based Robot System

e architecture of the VLP-based robot system is as shownin Figure 1 and the smart LED lamps with VLC and Blue-tooth control functions are used as transmitter for positioninginformation and illumination e camera installed verticallyon the robot captures image sequence and then through theVLC information obtained by dynamic LED-ROI trackingalgorithm and LED-ID recognition algorithm the robot usesthe location algorithm to locate its own position

21e Design of Smart LED Lamp e VLP takes the LEDlamps as the signal hotspot to send the location informationLED lamps are driven by VLP modulators that simulta-neously illuminate and broadcast their location signalsWhen the LED lights of the VLP modulator are installedeach lamp is assigned a unique identifier and the installedLED lights are associated with the corresponding coordi-nates in the positioning system database As the source oflocation information for the positioning system a LED lampwith visible light communication function that can be ad-justed through Bluetooth is described in this section

211 Hardware Design of Smart LED Lamp e self-designsmart LED lamp is provided with VLC function with theBluetooth cloud control model which makes it easy tochange ID code and location-based monitor By using visiblelight as the signal transmission carrier visible light com-munication equipment can be used as both illuminationdevice and signal source greatly reducing the cost ofequipment e hardware design of smart LED lamp isshown in Figure 2 and VLC controller is added between thepower supply (LED driver) and the LED lamp Using theBLE SoC as the microprogram controller of the controllerVLC datafrequency can be configured directly through theBLE wireless channel VLC control signal can be generatedthrough one of the IO pins of BLE SoC

212 Generation of Modulation Signals Based on BluetoothControl As shown in Figure 3 the hardware architecture ofa common BLE SOC consists of the following componentsmany SOCshave DMA controller especially which cantransfer data from peripheral devices such as ADC tomemory with minimal CPU intervention thus achievinghigh overall performance with high power efficiency

e universal serial peripheral interface (SPI) with DMAfunction is used in the proposed system instead as shown inFigure 4 DMA controller is set to repeat mode and SPIfrequency is adjusted according to VLC bit duration Afterbeing powered on all stored VLC data are loaded from flashmemory to the dedicated RAM area and its starting addressis configured as the source address of DMA controller After

2 International Journal of Optics

triggering the SPI module will start the continuous VLCtransmission without CPU intervention [22]

e basic architecture of smart LED lamps withBluetooth control is shown in Figure 5 Informationabout each lamprsquos VLC identifier (ID) and its Bluetoothaddress as well as its physical location is stored in thecloud [23] e position data is uploaded by Bluetoothand stored in the on-board data memory in the VLCcontroller en the memory generates a modulationpattern to modulate the LED lamps and broadcasts a

unique position identifier provided by each lampbreaking through the key points of indoor localizationUsers can shoot the LED light through the camera on asmartphone to obtain the unique VLC ID correspondingto the LED and then convert the received VLC ID into thecorresponding Bluetooth MAC address stored on theremote server or cloud and realize the further control ofLED lamps It also can get the actual location of the LEDwhich can be used as data for subsequent visible lightindoor positioning in the proposed system

Unique ID

Modulator

LED driver

LED luminaries

Acquisition of image

LED-ROI tracking

LED ID recognition

Triple-light positioning

Designed LED with VLC andbluetooth control functions

Mobile robot

VLC signal

Figure 1 e architecture of the VLP-based robot system

Powersource

Rectifying andstep-down

BLE

Changed VLCdata

IO port

BLE SoC

Switch

Adjust VLC datafrequency current

1 0 11 0 1 1 10

Figure 2 Block diagram of LED lamp

International Journal of Optics 3

22 Rolling Shutter Mechanism of the CMOS Image SensorBased on the modulated LED lamps the recognition of LED-ID is realized by using the image sensor-based VLP whichutilizes the rolling shutter mechanism of the CMOS imagesensor All pixels on the sensor are exposed at the same timeas the CCD sensor so at the end of each exposure the datafor all pixels is read out simultaneously is mechanism isoften referred to as the global shutter for the CCD sensorHowever for a CMOS sensor when exposure of one row is

completed the data of this row will be read out immediatelywhich means that the exposure and data reading are per-formed row by row is working mechanism is called therolling shutter mechanism of the CMOS sensor e LEDimage captured by the CMOS sensor would produce brightand dark stripes while turning the LED on and off during aperiod of exposure due to the rolling shutter mechanism ofCMOS sensor As shown in Figure 6 the LED image ac-quisition using CMOS sensor is illustrated

Analog peripherals

Digital peripherals

Timerscounters

DMA

Processor

OscillatorsPower

managementunit

AH

B bu

s

APB

bus

RF + modem+ LL privacy

GPIO

Memory(flashROMRAM)

Figure 3 Block diagram of BLE SoC

Powersource

Changed VLCdata

IO port

Switch

Adjust VLC datafrequency and current

Rectifying andstep-down

BLE SoC

BLE

1001

VLC IDVLC ID

BluetoothMAC address

ID1 Location Bluetooth MAC

ID2 Location Bluetooth MAC

IDn Location Bluetooth MAC1 0 11 0 1 1 10

Figure 5 Smart lighting system architecture

VLC data

1 0 11 0 1 1 10

RAM

Flash

DMA SPI IO

Load VLC datato the RAM once

Figure 4 VLC control signal generation

4 International Journal of Optics

23e Region of Interest (LED-ROI) Area Tracking Based onImprovedCAMshift-KalmanAlgorithm As far as we knowreal-time positioning of mobile robot requires real-timeshooting and processing of each image to obtain theregion of interest (ROI) area of the LED luminaire in theimage e success of the LED-ID detection and recog-nition method is inseparable from the accurate detectionof LED-ROI which determines the real-time perfor-mance and robustness of the system We use an improvedCAMshift-Kalman algorithm to improve the accuracyand robustness of the VLP-based system In order toobtain better tracking performance the Bhattacharyyacoefficient was used to update the observation noisematrix of Kalman filter in real time e CAMshift al-gorithm is used to track the output position of the targetas the measurement signal and Kalman filter algorithm isused to correct the target position e algorithm not onlycombines the CAMshift algorithm with the Kalman filterbut also introduces the Bhattacharyya coefficient Formore details one could refer to our previous work [24]We tested the effect of the algorithm in the dynamic casewith modulator tubes as background interference Underthe interference of LED tube this algorithm can stillensure the accurate detection of LED-ROI which reflectsrobustness and good real-time performance of the al-gorithm e dynamic tracking performance of the im-proved CAMshift-Kalman algorithm is shown inFigure 7

24 e LED-ID Recognition e LED-ID recognition inthis paper is to give certain characteristics to the light anddark stripes captured in the image By introducing fourcharacteristic variables frequency duty cycle distance and

phase difference coefficient the characteristics of LED-IDoptical strip code captured by CMOS image sensor are givene characteristic can be the number of light stripes of thestripe code in the LED pixel area the area of the LED pixelthe width of the bright stripe is greater than the width of thebright stripe and the width of the dark stripe and the phasedifference coefficient between the stripes en after theLED-ROI was obtained by the aforementioned algorithmthese features are extracted through simple image processingtechnology and the location information of LED-ID isrecognized through the preestablished database e detailscould be found in [25]

25 Triple-Light Positioning Algorithm As shown in Fig-ure 8 when three or more LEDs are detected in the vision ofthe image sensor the robot selects three LEDs for posi-tioning after information extraction through the algorithme coordinates of the LEDs are (x1 y1 z1) (x2 y2 z2) and(x3 y3 z3) Generally speaking the height of the ceiling is thesame in one place so z1 z2 z3 e position of the centerpoint O of the lens can be calculated by the similar trianglerelationship

erefore according to lens focal length f and the co-ordinates on the image plane (i j) the distance drsquo

Ok k 1 2 3between each LED image center and lens center O can becalculated

drsquoOk

f2

+ i2k + j

2k

1113969

(1)

And according to the similar triangle the distance dOk k

1 2 3 from the center O of the lens to the LED anchor can beobtained

dOk H times d

rsquoOk

f (2)

where H is the vertical distance between the LED and thelens plane

H fa

aprime f

b

bprime f

c

cprime (3)

en the coordinates of the lens center point O (x y z)can be calculated by the following formula

x minus x1( 11138572

+ y minus y1( 11138572

+(H)2

d2o1

x minus x2( 11138572

+ y minus y2( 11138572

+(H)2

d2o21113966

x minus x3( 11138572

+ y minus y3( 11138572

+(H)2

d2o3

(4)

e aforementioned formula can be converted to

x

y

⎡⎢⎢⎢⎣ ⎤⎥⎥⎥⎦ 12

x2 minus x1 y2 minus y1

x3 minus x1 y3 minus y1

⎡⎢⎢⎢⎣ ⎤⎥⎥⎥⎦

minus 1 d2o1 minus d

2o2 minus x

21 + y

21 minus x

22 minus y

221113872 1113873

d2o1 minus d

2o3 minus x

21 + y

21 minus x

23 minus y

231113872 1113873

⎡⎢⎢⎢⎢⎢⎢⎢⎢⎢⎣⎤⎥⎥⎥⎥⎥⎥⎥⎥⎥⎦ (5)

Row

Image

Time

Exposure time Read-out time

Frame rate

Figure 6 Rolling shutter mechanism of the CMOS image sensorand the LED image acquisition

International Journal of Optics 5

And the z coordinate is

z z1 minus H z2 minus H z3 minus H (6)

According to equations (5) and (6) we can calculate theposition of the camera lens center O Meanwhile the co-ordinates of the robot can be calculated through the staticconversion relationship between the camera and the robotbase e analysis of the algorithm could be found in ourprevious work [26]

3 Implementation of the VLP-BasedRobot System

31 System Setup and Implementation e system setup ofthe VLP-based robot system is shown in Table 1 Mobile robot(Turtlebot3) was used to build the VLP-based system plat-form e images of LEDs were shot by MindVision UB-300industrial camera which is fixed vertically on the robot byprior extrinsic calibration and transmitted by Raspberry Pi 3

Model B As shown in Figure 9 we implemented the system inthe office with three smart LED lamps installed on the ceilingfor positioning e related information of LED lamps isshown in Figure 9 In positioning process the image sensorcan detect at least three LEDs to ensure the realization of thetriple-light positioning Algorithm And for the weak pro-cessor performance of Raspberry Pi 3 Model B the programof image processing and location calculation is run on aremote controller e operating system of the Turtlebot3robot is Ubuntu 1604 MATE which corresponds to theKinetic version of ROS and the system of the remote con-troller is Ubuntu 1604 desktop We demonstrate the per-formance of the VLP robot system on ROS e camerainstalled vertically on the robot captures the image and thenobtains the location information stored by the LED imme-diately through the LED-ROI dynamic tracking algorithmand LED-ID recognition and finally realizes the centimeter-level location through the triple-light positioning algorithmPurple dots are used to represent the positioning result of thegeometric center of the camera obtained by VLP

Interference

LED-ROI obtained by thedynamic tracking algorithm

(a) (b)

(c) (d)

Figure 7 Dynamic tracking performance with modulator tube as background interference of the improved CAMshift-Kalman algorithm

6 International Journal of Optics

32 Positioning Accuracy of the VLP-Based Robot SystemIn order to evaluate the positioning accuracy of the systemtwo series of experiments were carried out e first serieswas used to test the performance of the stationary robot Asshown in Figure 10 uniformly distributed points in theexperimental area were randomly selected to calculate thestandard deviation between the measured position and the

actual position For the positioning accuracy shown inFigure 10 90 positioning errors are less than 3231 cm themaximum positioning error is no more than 5 cm and theaverage positioning error is 214 cm

We test the performance for moving mobile robot in thesecond series of experiments and we control the robot to gostraight and travel a distance after turning to demonstrate

LED1 (x1 y1 z1) LED2 (x2 y2 z2)

(x1 y1 z1)

(x3 y3 z3)

(x2 y2 z2)

LED3 (x3 y3 z3)

Camera

(i2 f2)

(i3 f3)

(i1 f1)

Image plane

H

f

a

b c

bprime

aprime

cprimeOprime

Okprime

LensO

dok

dprimeok

(i3 j3)

(i1 j1)(i2 j2)

Figure 8 e procedure of triple-light positioning

Table 1 Parameters in this paper

Camera specificationsModel MindVision UB-300Pixel (HtimesV) 2048times1536Time of exposure 002msType of shutter acquisition mode Electronic rolling shutterAcquisition mode Successive and soft trigger

Turtlebot3 robot specificationsModule Raspberry pi 3 BCPU Quad core 12 GHz broadcom BCM2837RAM 1GBOperating system Ubuntu mate 1604

Remote controller specificationsModule Acer VN7-593GCPU Quad core Intelreg Coretrade i7-7700HQOperating system Ubuntu 1604 LTS

System platform specificationsSize (LtimesWtimesH) 146times146times 285cm3

LED specificationsCoordinates of LED1 (cm) (13 159 285)Coordinates of LED2 (cm) (159 159 285)Coordinates of LED3 (cm) (159 13 285)e half-power angles of LEDdeg (ψ12) 60Current of each LED 300mARated power 18WOptical output 1500lm 6000KLED modulation frequency 5 kHzDiameter of each LED 150mm

International Journal of Optics 7

LED lamp

Laser radar

Industrial camera

Wheel

Raspbeery Pi

Open CR

Battery

Electricmachinery

TurtleBot3 withindustrial camera

(00)

x

y

LED3 5KHZ (15913)

LED2 3KHZ (159159)

LED1 1KHZ (13159)

Figure 9 VLP-based system implementation platform

80

60

40

20

0

4020

0ndash20

ndash40

200

ndash20ndash40

40

x (cm)y (cm)

z (cm

)

Real positionEstimated position

(a)

ndash40ndash40

ndash30 ndash20 ndash10 0 10 20 30 40

ndash30

ndash20

ndash10

0

10

20

30

40

y (cm

)

x (cm)

Real positionEstimated position

(b)

30

25

20

15

10

5

00 05 1 15 2 25 3 35 4 45 5

Error (cm)

e n

umbe

r of p

ositi

onin

g er

ror

(c)

1

09

08

07

06

05

04

03

02

01

00 1 2 3 4 5 6

Position Error (cm)

e c

df o

f pos

ition

erro

r (

) X 3231Y 0901

(d)

Figure 10 Stationary positioning of VLP-based robot system (a)e 3D positioning results (b)e horizontal view of the 3-D positioningresults (c) Histogram of the positioning error in the system (d) e cumulative distribution function (CDF) curves of positioning errors

8 International Journal of Optics

the real positioning effect of the localization system withdifferent speed As shown in Figure 11 the measured po-sitions of the robot are plotted as purple dots and the robotindustrial camera are highly coincident with the positioningresults in different motion states in the refined section of therobot which reflects the good positioning effect of the VLCpositioning system on ROS mobile robot Due to the re-strictions of ground field we let the robot travel at the speedof 2ndash4 cms However the moving speed of the robot has noeffect on the positioning accuracy but depends on the real-time positioning which is discussed in Section 33 It is worthmentioning that the position results in the coordinates of thecenter of the camera fixed vertically on the robot which canbe transformed to the center of the robot by coordinatetransformation in the VLP robot system

3.3. Real-Time Positioning Speed of the VLP-Based Robot System
Positioning speed is another key factor in the VLP-based robot system: it bounds the moving speed at which the mobile robot can still receive the VLC information and calculate its current position in time. For the image-sensor-based VLP system, the mobile robot needs to capture and extract the VLP information before it passes the VLP information transmitter, that is, before the VLC light leaves the field of view (FOV) of the CMOS image sensor. Therefore, the maximum supported moving speed is the speed at which the mobile robot can extract the VLP information within the time the LED takes to cross the image from the leftmost edge to the rightmost edge.

We assume that the target LED is tangent to the left edge of the image in frame N and to the right edge of the image in frame N + 1, as shown in Figure 12. The maximum velocity v of the mobile robot is then defined as v = s/t, where s is the distance the terminal device moves between the two frames and t is the time from the beginning of image capture to the completion of the positioning calculation. Based on the proportional relationship between image coordinates and world coordinates, this distance satisfies s/r = D/d, where r is the pixel length of the image, D is the actual diameter of the LED, and d is the diameter of the LED in the image.

As mentioned in Section 3.1, due to the weak processing capacity of the Raspberry Pi 3 in the Turtlebot3, most of the image processing cannot be performed directly on the Raspberry Pi. Therefore, the images captured by the CMOS camera on the robot are transmitted to a remote laptop for processing, which introduces positioning delays. With a timer and counter set in the program, the measured average time t is 0.3893 s. As can be seen from Table 1, the actual diameter D of the LED is 150 mm; the pixel length r of the image is 800 pixels, and the LED diameter d in the image is 55.54 pixels. According to the definition of speed, the maximum movement speed of the positioning terminal is then about 5.55 m/s, that is, v ≈ 20 km/h. If the robot were equipped with a small onboard computer with better computing power, the data-transmission time could be saved and higher moving speeds supported.
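For clarity, the following short script restates that arithmetic with the values just quoted (r, D, d, and t from Table 1 and this section); it is only a check of the speed bound, not part of the system's code.

```python
# Check of the maximum-speed bound: s/r = D/d, v = s/t (values from Table 1).
r = 800        # pixel length of the image (pixels)
D = 0.150      # actual LED diameter (m)
d = 55.54      # LED diameter in the image (pixels)
t = 0.3893     # average capture-to-position time (s)

s = r * D / d  # distance the robot can move between the two frames (m)
v = s / t      # maximum supported moving speed (m/s)
print(f"s = {s:.3f} m, v = {v:.3f} m/s = {v * 3.6:.1f} km/h")
# Output: s = 2.161 m, v = 5.550 m/s = 20.0 km/h
```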

Figure 11: The positioning effects for the mobile robot moving along different tracks at different speeds in the VLP-based robot system.

Figure 12: The relative position of the LED in two consecutive frames: (a) frame N; (b) frame N + 1.


4. Conclusion

In this paper, we design a high-accuracy and real-time VLP-based robot system on ROS and implement the indoor VLP system on the TurtleBot3 robot platform in an office, as a practical application scenario, equipped with the designed LED lamps having VLC and Bluetooth control functions. The implementation shows that the VLP-based system can reach cm-level positioning accuracy of 2.14 cm on average and good real-time performance that can support moving speeds up to approximately 20 km/h, which gives it great research prospects in the autonomous positioning and navigation of indoor robots. In the future, we will apply the VLP-based system to fusion localization and robot navigation on ROS, making full use of the advantages of VLP while compensating for the defects of visible light positioning.

Data Availability

The source code of this work is not open to the public, and the experimental data are reflected in the experimental analysis sections of the article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Authors' Contributions

Xianmin Li contributed to the conception of the study, performed the data analysis, and helped perform the analysis with constructive discussions. Zihong Yan contributed significantly to the analysis and manuscript preparation and wrote the manuscript. Linyi Huang performed the experiment, contributed to the analysis with constructive discussions, and wrote the manuscript. Shihuan Chen helped revise the manuscript. Manxi Liu helped perform the analysis with constructive discussions. Xianmin Li, Zihong Yan, and Linyi Huang contributed equally to this work.

References

[1] W. Guan, S. Chen, S. Wen, Z. Tan, H. Song, and W. Hou, "High-accuracy robot indoor localization scheme based on robot operating system using visible light positioning," IEEE Photonics Journal, vol. 12, no. 2, pp. 1–16, 2020.
[2] J. Civera, A. J. Davison, and J. Montiel, "Inverse depth parametrization for monocular SLAM," IEEE Transactions on Robotics, vol. 24, no. 5, p. 932, 2008.
[3] M. Liu and R. Siegwart, "DP-FACT: towards topological mapping and scene recognition with color for omnidirectional camera," in Proceedings of the International Conference on Robotics and Automation (ICRA), Saint Paul, MN, USA, May 2012.
[4] P. Henry, M. Krainin, E. Herbst, X. Ren, and D. Fox, "RGB-D mapping: using depth cameras for dense 3D modeling of indoor environments," in Proceedings of the 12th International Symposium on Experimental Robotics (ISER 2010), New Delhi, India, December 2010.
[5] M. Liu, F. Pomerleau, F. Colas, and R. Siegwart, "Normal estimation for pointcloud using GPU based sparse tensor voting," in Proceedings of the IEEE International Conference on Robotics and Biomimetics (ROBIO), Shenzhen, China, December 2013.
[6] R. Miyagusuku, A. Yamashita, and H. Asama, "Data information fusion from multiple access points for WiFi-based self-localization," IEEE Robotics and Automation Letters, vol. 4, no. 2, pp. 269–276, 2019.
[7] H. Chen, W. Guan, S. Li, and Y. Wu, "Indoor high precision three-dimensional positioning system based on visible light communication using modified genetic algorithm," Optics Communications, vol. 413, pp. 103–120, 2018.
[8] W. Guan, "A novel three-dimensional indoor positioning algorithm design based on visible light communication," Optics Communications, vol. 392, pp. 282–293, 2017.
[9] S. Juneja and S. Vashisth, "Indoor positioning system using visible light communication," in Proceedings of the 2017 International Conference on Computing and Communication Technologies for Smart Nation (IC3TSN), Gurgaon, India, October 2017.
[10] S. Chen, W. Guan, Z. Tan et al., "High accuracy and error analysis of indoor visible light positioning algorithm based on image sensor," 2019, http://arxiv.org/abs/1911.11773.
[11] L. Bai, Y. Yang, C. Feng et al., "An enhanced camera assisted received signal strength ratio algorithm for indoor visible light positioning," in Proceedings of the 2020 IEEE International Conference on Communications Workshops (ICC Workshops), Dublin, Ireland, June 2020.
[12] A. H. A. Bakar, T. Glass, H. Y. Tee, F. Alam, and M. Legg, "Accurate visible light positioning using multiple photodiode receiver and machine learning," IEEE Transactions on Instrumentation and Measurement, 2020.
[13] L. Li, P. Hu, C. Peng, G. Shen, and F. Zhao, "Epsilon: a visible light based positioning system," in Proceedings of the 11th USENIX Conference on Networked Systems Design and Implementation, Berkeley, CA, USA, April 2014.
[14] N. Rajagopal, P. Lazik, and A. Rowe, "Visual light landmarks for mobile devices," in Proceedings of the 13th International Symposium on Information Processing in Sensor Networks, Berlin, Germany, April 2014.
[15] Y.-S. Kuo, P. Pannuto, K.-J. Hsiao, and P. Dutta, "Luxapose: indoor positioning with mobile phones and visible light," in Proceedings of the 20th Annual International Conference on High Performance Computing, Goa, India, December 2014.
[16] Z. Yang, Z. Wang, J. Zhang, C. Huang, and Q. Zhang, "Wearables can afford: light-weight indoor positioning with visible light," in Proceedings of the 13th Annual International Conference on Mobile Systems, Applications, and Services, Florence, Italy, May 2015.
[17] J. Fang, Z. Yang, S. Long et al., "High-speed indoor navigation system based on visible light and mobile phone," IEEE Photonics Journal, vol. 9, no. 2, pp. 1–11, 2017.
[18] M. S. Rahman, M. M. Haque, and K. D. Kim, "High precision indoor positioning using lighting LED and image sensor," in Proceedings of the International Conference on Computer and Information Technology, Dhaka, Bangladesh, December 2011.
[19] Q. Liang, J. Lin, and M. Liu, "Towards robust visible light positioning under LED shortage by visual-inertial fusion," in Proceedings of the 2019 International Conference on Indoor Positioning and Indoor Navigation (IPIN), Pisa, Italy, October 2019.
[20] Q. Liang and M. Liu, "A tightly coupled VLC-inertial localization system by EKF," IEEE Robotics and Automation Letters, vol. 5, no. 2, pp. 3129–3136, 2020.
[21] W. Guan, Research on High Precision Indoor Visible Light Localization Algorithm Based on Image Sensor, Master's thesis, South China University of Technology, Guangdong Province, China, 2019.
[22] C. Qiu, B. Hussain, and C. P. Yue, "Bluetooth based wireless control for iBeacon and VLC enabled lighting," in Proceedings of the 2019 IEEE 8th Global Conference on Consumer Electronics (GCCE), Osaka, Japan, October 2019.
[23] B. Hussain, C. Qiu, and C. P. Yue, "Smart lighting control and services using visible light communication and Bluetooth," in Proceedings of the 2019 IEEE 8th Global Conference on Consumer Electronics (GCCE), Osaka, Japan, October 2019.
[24] W. Guan, Z. Liu, S. Wen, H. Xie, and X. Zhang, "Visible light dynamic positioning method using improved CAMshift-Kalman algorithm," IEEE Photonics Journal, vol. 11, no. 6, pp. 1–22, 2019.
[25] C. Xie, "The LED-ID detection and recognition method based on visible light positioning using proximity method," IEEE Photonics Journal, vol. 10, no. 2, Article ID 7902116, 2018.
[26] W. Guan, S. Wen, L. Liu, and H. Zhang, "High-precision indoor positioning algorithm based on visible light communication using complementary metal–oxide–semiconductor image sensor," Optical Engineering, vol. 58, p. 1, 2019.

Page 3: High-Accuracy and Real-Time Indoor Positioning System ...

triggering the SPI module will start the continuous VLCtransmission without CPU intervention [22]

e basic architecture of smart LED lamps withBluetooth control is shown in Figure 5 Informationabout each lamprsquos VLC identifier (ID) and its Bluetoothaddress as well as its physical location is stored in thecloud [23] e position data is uploaded by Bluetoothand stored in the on-board data memory in the VLCcontroller en the memory generates a modulationpattern to modulate the LED lamps and broadcasts a

unique position identifier provided by each lampbreaking through the key points of indoor localizationUsers can shoot the LED light through the camera on asmartphone to obtain the unique VLC ID correspondingto the LED and then convert the received VLC ID into thecorresponding Bluetooth MAC address stored on theremote server or cloud and realize the further control ofLED lamps It also can get the actual location of the LEDwhich can be used as data for subsequent visible lightindoor positioning in the proposed system

Unique ID

Modulator

LED driver

LED luminaries

Acquisition of image

LED-ROI tracking

LED ID recognition

Triple-light positioning

Designed LED with VLC andbluetooth control functions

Mobile robot

VLC signal

Figure 1 e architecture of the VLP-based robot system

Powersource

Rectifying andstep-down

BLE

Changed VLCdata

IO port

BLE SoC

Switch

Adjust VLC datafrequency current

1 0 11 0 1 1 10

Figure 2 Block diagram of LED lamp

International Journal of Optics 3

22 Rolling Shutter Mechanism of the CMOS Image SensorBased on the modulated LED lamps the recognition of LED-ID is realized by using the image sensor-based VLP whichutilizes the rolling shutter mechanism of the CMOS imagesensor All pixels on the sensor are exposed at the same timeas the CCD sensor so at the end of each exposure the datafor all pixels is read out simultaneously is mechanism isoften referred to as the global shutter for the CCD sensorHowever for a CMOS sensor when exposure of one row is

completed the data of this row will be read out immediatelywhich means that the exposure and data reading are per-formed row by row is working mechanism is called therolling shutter mechanism of the CMOS sensor e LEDimage captured by the CMOS sensor would produce brightand dark stripes while turning the LED on and off during aperiod of exposure due to the rolling shutter mechanism ofCMOS sensor As shown in Figure 6 the LED image ac-quisition using CMOS sensor is illustrated

Analog peripherals

Digital peripherals

Timerscounters

DMA

Processor

OscillatorsPower

managementunit

AH

B bu

s

APB

bus

RF + modem+ LL privacy

GPIO

Memory(flashROMRAM)

Figure 3 Block diagram of BLE SoC

Powersource

Changed VLCdata

IO port

Switch

Adjust VLC datafrequency and current

Rectifying andstep-down

BLE SoC

BLE

1001

VLC IDVLC ID

BluetoothMAC address

ID1 Location Bluetooth MAC

ID2 Location Bluetooth MAC

IDn Location Bluetooth MAC1 0 11 0 1 1 10

Figure 5 Smart lighting system architecture

VLC data

1 0 11 0 1 1 10

RAM

Flash

DMA SPI IO

Load VLC datato the RAM once

Figure 4 VLC control signal generation

4 International Journal of Optics

23e Region of Interest (LED-ROI) Area Tracking Based onImprovedCAMshift-KalmanAlgorithm As far as we knowreal-time positioning of mobile robot requires real-timeshooting and processing of each image to obtain theregion of interest (ROI) area of the LED luminaire in theimage e success of the LED-ID detection and recog-nition method is inseparable from the accurate detectionof LED-ROI which determines the real-time perfor-mance and robustness of the system We use an improvedCAMshift-Kalman algorithm to improve the accuracyand robustness of the VLP-based system In order toobtain better tracking performance the Bhattacharyyacoefficient was used to update the observation noisematrix of Kalman filter in real time e CAMshift al-gorithm is used to track the output position of the targetas the measurement signal and Kalman filter algorithm isused to correct the target position e algorithm not onlycombines the CAMshift algorithm with the Kalman filterbut also introduces the Bhattacharyya coefficient Formore details one could refer to our previous work [24]We tested the effect of the algorithm in the dynamic casewith modulator tubes as background interference Underthe interference of LED tube this algorithm can stillensure the accurate detection of LED-ROI which reflectsrobustness and good real-time performance of the al-gorithm e dynamic tracking performance of the im-proved CAMshift-Kalman algorithm is shown inFigure 7

24 e LED-ID Recognition e LED-ID recognition inthis paper is to give certain characteristics to the light anddark stripes captured in the image By introducing fourcharacteristic variables frequency duty cycle distance and

phase difference coefficient the characteristics of LED-IDoptical strip code captured by CMOS image sensor are givene characteristic can be the number of light stripes of thestripe code in the LED pixel area the area of the LED pixelthe width of the bright stripe is greater than the width of thebright stripe and the width of the dark stripe and the phasedifference coefficient between the stripes en after theLED-ROI was obtained by the aforementioned algorithmthese features are extracted through simple image processingtechnology and the location information of LED-ID isrecognized through the preestablished database e detailscould be found in [25]

25 Triple-Light Positioning Algorithm As shown in Fig-ure 8 when three or more LEDs are detected in the vision ofthe image sensor the robot selects three LEDs for posi-tioning after information extraction through the algorithme coordinates of the LEDs are (x1 y1 z1) (x2 y2 z2) and(x3 y3 z3) Generally speaking the height of the ceiling is thesame in one place so z1 z2 z3 e position of the centerpoint O of the lens can be calculated by the similar trianglerelationship

erefore according to lens focal length f and the co-ordinates on the image plane (i j) the distance drsquo

Ok k 1 2 3between each LED image center and lens center O can becalculated

drsquoOk

f2

+ i2k + j

2k

1113969

(1)

And according to the similar triangle the distance dOk k

1 2 3 from the center O of the lens to the LED anchor can beobtained

dOk H times d

rsquoOk

f (2)

where H is the vertical distance between the LED and thelens plane

H fa

aprime f

b

bprime f

c

cprime (3)

en the coordinates of the lens center point O (x y z)can be calculated by the following formula

x minus x1( 11138572

+ y minus y1( 11138572

+(H)2

d2o1

x minus x2( 11138572

+ y minus y2( 11138572

+(H)2

d2o21113966

x minus x3( 11138572

+ y minus y3( 11138572

+(H)2

d2o3

(4)

e aforementioned formula can be converted to

x

y

⎡⎢⎢⎢⎣ ⎤⎥⎥⎥⎦ 12

x2 minus x1 y2 minus y1

x3 minus x1 y3 minus y1

⎡⎢⎢⎢⎣ ⎤⎥⎥⎥⎦

minus 1 d2o1 minus d

2o2 minus x

21 + y

21 minus x

22 minus y

221113872 1113873

d2o1 minus d

2o3 minus x

21 + y

21 minus x

23 minus y

231113872 1113873

⎡⎢⎢⎢⎢⎢⎢⎢⎢⎢⎣⎤⎥⎥⎥⎥⎥⎥⎥⎥⎥⎦ (5)

Row

Image

Time

Exposure time Read-out time

Frame rate

Figure 6 Rolling shutter mechanism of the CMOS image sensorand the LED image acquisition

International Journal of Optics 5

And the z coordinate is

z z1 minus H z2 minus H z3 minus H (6)

According to equations (5) and (6) we can calculate theposition of the camera lens center O Meanwhile the co-ordinates of the robot can be calculated through the staticconversion relationship between the camera and the robotbase e analysis of the algorithm could be found in ourprevious work [26]

3 Implementation of the VLP-BasedRobot System

31 System Setup and Implementation e system setup ofthe VLP-based robot system is shown in Table 1 Mobile robot(Turtlebot3) was used to build the VLP-based system plat-form e images of LEDs were shot by MindVision UB-300industrial camera which is fixed vertically on the robot byprior extrinsic calibration and transmitted by Raspberry Pi 3

Model B As shown in Figure 9 we implemented the system inthe office with three smart LED lamps installed on the ceilingfor positioning e related information of LED lamps isshown in Figure 9 In positioning process the image sensorcan detect at least three LEDs to ensure the realization of thetriple-light positioning Algorithm And for the weak pro-cessor performance of Raspberry Pi 3 Model B the programof image processing and location calculation is run on aremote controller e operating system of the Turtlebot3robot is Ubuntu 1604 MATE which corresponds to theKinetic version of ROS and the system of the remote con-troller is Ubuntu 1604 desktop We demonstrate the per-formance of the VLP robot system on ROS e camerainstalled vertically on the robot captures the image and thenobtains the location information stored by the LED imme-diately through the LED-ROI dynamic tracking algorithmand LED-ID recognition and finally realizes the centimeter-level location through the triple-light positioning algorithmPurple dots are used to represent the positioning result of thegeometric center of the camera obtained by VLP

Interference

LED-ROI obtained by thedynamic tracking algorithm

(a) (b)

(c) (d)

Figure 7 Dynamic tracking performance with modulator tube as background interference of the improved CAMshift-Kalman algorithm

6 International Journal of Optics

32 Positioning Accuracy of the VLP-Based Robot SystemIn order to evaluate the positioning accuracy of the systemtwo series of experiments were carried out e first serieswas used to test the performance of the stationary robot Asshown in Figure 10 uniformly distributed points in theexperimental area were randomly selected to calculate thestandard deviation between the measured position and the

actual position For the positioning accuracy shown inFigure 10 90 positioning errors are less than 3231 cm themaximum positioning error is no more than 5 cm and theaverage positioning error is 214 cm

We test the performance for moving mobile robot in thesecond series of experiments and we control the robot to gostraight and travel a distance after turning to demonstrate

LED1 (x1 y1 z1) LED2 (x2 y2 z2)

(x1 y1 z1)

(x3 y3 z3)

(x2 y2 z2)

LED3 (x3 y3 z3)

Camera

(i2 f2)

(i3 f3)

(i1 f1)

Image plane

H

f

a

b c

bprime

aprime

cprimeOprime

Okprime

LensO

dok

dprimeok

(i3 j3)

(i1 j1)(i2 j2)

Figure 8 e procedure of triple-light positioning

Table 1 Parameters in this paper

Camera specificationsModel MindVision UB-300Pixel (HtimesV) 2048times1536Time of exposure 002msType of shutter acquisition mode Electronic rolling shutterAcquisition mode Successive and soft trigger

Turtlebot3 robot specificationsModule Raspberry pi 3 BCPU Quad core 12 GHz broadcom BCM2837RAM 1GBOperating system Ubuntu mate 1604

Remote controller specificationsModule Acer VN7-593GCPU Quad core Intelreg Coretrade i7-7700HQOperating system Ubuntu 1604 LTS

System platform specificationsSize (LtimesWtimesH) 146times146times 285cm3

LED specificationsCoordinates of LED1 (cm) (13 159 285)Coordinates of LED2 (cm) (159 159 285)Coordinates of LED3 (cm) (159 13 285)e half-power angles of LEDdeg (ψ12) 60Current of each LED 300mARated power 18WOptical output 1500lm 6000KLED modulation frequency 5 kHzDiameter of each LED 150mm

International Journal of Optics 7

LED lamp

Laser radar

Industrial camera

Wheel

Raspbeery Pi

Open CR

Battery

Electricmachinery

TurtleBot3 withindustrial camera

(00)

x

y

LED3 5KHZ (15913)

LED2 3KHZ (159159)

LED1 1KHZ (13159)

Figure 9 VLP-based system implementation platform

80

60

40

20

0

4020

0ndash20

ndash40

200

ndash20ndash40

40

x (cm)y (cm)

z (cm

)

Real positionEstimated position

(a)

ndash40ndash40

ndash30 ndash20 ndash10 0 10 20 30 40

ndash30

ndash20

ndash10

0

10

20

30

40

y (cm

)

x (cm)

Real positionEstimated position

(b)

30

25

20

15

10

5

00 05 1 15 2 25 3 35 4 45 5

Error (cm)

e n

umbe

r of p

ositi

onin

g er

ror

(c)

1

09

08

07

06

05

04

03

02

01

00 1 2 3 4 5 6

Position Error (cm)

e c

df o

f pos

ition

erro

r (

) X 3231Y 0901

(d)

Figure 10 Stationary positioning of VLP-based robot system (a)e 3D positioning results (b)e horizontal view of the 3-D positioningresults (c) Histogram of the positioning error in the system (d) e cumulative distribution function (CDF) curves of positioning errors

8 International Journal of Optics

the real positioning effect of the localization system withdifferent speed As shown in Figure 11 the measured po-sitions of the robot are plotted as purple dots and the robotindustrial camera are highly coincident with the positioningresults in different motion states in the refined section of therobot which reflects the good positioning effect of the VLCpositioning system on ROS mobile robot Due to the re-strictions of ground field we let the robot travel at the speedof 2ndash4 cms However the moving speed of the robot has noeffect on the positioning accuracy but depends on the real-time positioning which is discussed in Section 33 It is worthmentioning that the position results in the coordinates of thecenter of the camera fixed vertically on the robot which canbe transformed to the center of the robot by coordinatetransformation in the VLP robot system

33 Real-Time Positioning Speed of the VLP-Based RobotSystem Location speed is another key factor in the VLP-based robot system which represents the mobile robotrsquosmoving speed when receiving VLC information andcalculating the current position in time erefore for theIS-based VLP system before the mobile robot passesthrough the VLP information transmitter it needs tocapture and extract the VLP information that is beforethe VLC light leaves the field of view (FOV) of the CMOSimage sensor erefore the maximum supportedmoving speed is the speed at which the mobile robotextracts the VLP information over a period of time from

the edge of the leftmost to the right We assume that thetarget LED is tangent to the left edge of the image in frameN and to the right edge of the image in frame N + 1 asshown in Figure 12 and the maximum velocity v of themobile robot is defined as v st where s is the distancethe terminal device moves between two frames and t isthe time from the beginning of the image capture to thecompletion of the positioning calculation Based on theproportional relationship between image coordinates andworld coordinates the relation is expressed as sr Ddwhere r is the pixel length of the image D is the actualdiameter of the LED and D is the diameter of the LED inthe image

As mentioned in Section 31 due to the weak pro-cessing capacity of Raspberry Pi 3 in Turtlebot3 most ofthe image process cannot be directly performed onRaspberry Pi erefore the images captured by theCMOS camera in the robot are transmitted to a remotelaptop for processing leading to positioning delays Weset the timer and counter in the program and the averagetime t is 03893 s As can be seen from Table 1 the actualdiameter D of LED is 150 mm e pixel length r of theimage is 800 (pixels) and the LED diameter d in theimage is 5554 (pixels) According to the definition ofspeed the maximum movement speed of the positioningterminal is 9497 ms vasymp 20 kmh If the robot isequipped with a small computer with better computingpower it can save the time it takes to transmit data andsupport higher moving speeds

Figure 11 e positioning effects for mobile robot moving along different tracks with different speeds of VLP-based robot system

Frame N

(a)

Frame N + 1

(b)

Figure 12 e relative position of the LED in two consecutive frames

International Journal of Optics 9

4 Conclusion

In this paper we design a high-accuracy and real-timeVLP-based robot system on ROS and implement theindoor VLP system based on TurtleBot3 robot platform inthe office as a practical application scenario which isequipped with designed LED lamps having VLC andBluetooth control function e implementation of thesystem shows that the positioning accuracy of the VLP-based system can reach cm-level position accuracy of214 cm and have a good real-time performance that cansupport the moving speed up to 20 kmh which has agreat research prospect in the autonomous positioningand navigation of indoor robots In the future we willapply the VLP-based system to the fusion localization orrobot navigation of ROS making full use of the advan-tages of VLP while making up for the defects of visiblelight positioning

Data Availability

e source code of our finished article is not open to theoutside world and the experimental data have been reflectedin the Experimental Analysis section

Conflicts of Interest

e authors declare that they have no conflicts of interest

Authorsrsquo Contributions

Xianmin Li contributed to the conception of the studyperformed the data analysis and helped perform the analysiswith constructive discussions Zihong Yan contributedsignificantly to the analysis and manuscript preparation andwrote the manuscript Linyi Huang performed the experi-ment and analysis with constructive discussions and wrotethe manuscript Shihuan Chen helped revise the manuscriptManxi Liu helped perform the analysis with constructivediscussions Xianmin Li Zihong Yan and Linyi Huangcontributed equally to this work

References

[1] W Guan S Chen S Wen Z Tan H Song and W HouldquoHigh-accuracy robot indoor localization scheme based onrobot operating system using visible light positioningrdquo IEEEPhotonics Journal vol 12 no 2 pp 1ndash16 2020

[2] J Civera A J Davison and J Montiel ldquoInverse depth pa-rametrization for monocular slamrdquo IEEE Transactions onRobotics vol 24 no 5 p 932 2008

[3] M Liu R Siegwart and D P Fact ldquoTowards topologicalmapping and scene recognition with color for omnidirec-tional camerardquo in Proceedings of the International Conferenceon Robotics and Automation (ICRA) Saint Paul MN USAMay 2012

[4] P Henry M Krainin E Herbst X Ren and D Fox ldquoRGB-Dmapping using depth cameras for dense 3D modeling ofindoor environmentsrdquo in Proceedings of the 12th Interna-tional Symposium on Experimental Robotics ISER 2010 NewDelhi India December 2010

[5] M Liu F Pomerleau F Colas and R Siegwart ldquoNormalestimation for pointcloud using GPU based sparse tensorvotingrdquo in Proceedings of the IEEE International Conferenceon Robotics and Biomimetics (ROBIO) Shenzhen ChinaDecember 2013

[6] R Miyagusuku A Yamashita and H Asama ldquoData infor-mation fusion from multiple access points for WiFi-basedself-localizationrdquo IEEE Robotics and Automation Lettersvol 4 no 2 pp 269ndash276 2019

[7] H Chen W Guan S Li and Y Wu ldquoIndoor high precisionthree-dimensional positioning system based on visible lightcommunication using modified genetic algorithmrdquo OpticsCommunications vol 413 pp 103ndash120 2018

[8] W Guan ldquoA novel three-dimensional indoor positioningalgorithm design based on visible light communicationrdquoOptics Communication vol 392 pp 282ndash293 2017

[9] S Juneja and S Vashisth ldquoIndoor positioning system usingvisible light communicationrdquo in Proceedings of the 2017 In-ternational Conference on Computing and CommunicationTechnologies for Smart Nation (IC3TSN) Gurgaon IndiaOctober 2017

[10] S Chen W Guan Z Tan et al ldquoHigh accuracy and erroranalysis of indoor visible light positioning algorithm based onimage sensorrdquo 2019 httparxivorgabs191111773

[11] L Bai Y Yang C Feng et al ldquoAn enhanced camera assistedreceived signal strength ratio algorithm for indoor visible lightpositioningrdquo in Proceedings of the 2020 IEEE InternationalConference On Communications Workshops (Icc Workshops)Dublin Ireland June 2020

[12] A H A Bakar T Glass H Y Tee F Alam and M LeggldquoAccurate visible light positioning using multiple photodiodereceiver and machine learningrdquo IEEE Transactions on In-strumentation and Measurement 2020

[13] L Li P Hu C Peng G Shen and F Zhao ldquoEpsilon a visiblelight based positioning systemrdquo in Proceedings of the 11thUSENIX Conference on Networked Systems Design andImplementation Berkeley CA USA April 2014

[14] N Rajagopal P Lazik and A Rowe ldquoVisual light landmarksfor mobile devicesrdquo in Proceedings of the 13th InternationalSymposium Information Processing in Sensor Networks BerlinGermany April 2014

[15] Y-S Kuo P Pannuto K-J Hsiao and P Dutta ldquoLuxaposeindoor positioning with mobile phones and visible lightrdquo inProceedings of the 20th Annual International Conference onHigh Performance Computing Goa India December 2014

[16] Z Yang Z Wang J Zhang C Huang and Q ZhangldquoWearables can afford light-weight indoor positioning withvisible lightrdquo in Proceedings of the 13th Annual InternationalConference on Mobile Systems Applications and ServicesFlorence Italy May 2015

[17] J Fang Z Yang S Long et al ldquoHigh-speed indoor navigationsystem based on visible light and mobile phonerdquo IEEEPhotonics Journal vol 9 no 2 pp 1ndash11 2017

[18] M S Rahman M M Haque and K D Kim ldquoHigh precisionindoor positioning using lighting LED and image sensorrdquo inProceedings of the International Conference on Computer andInformation Technology Dhaka Bangladesh December 2011

[19] Q Liang J Lin and M Liu ldquoTowards robust visible lightpositioning under LED shortage by visual-inertial fusionrdquo inProceedings of the 2019 International Conference on IndoorPositioning and Indoor Navigation (IPIN) Pisa Italy October2019

10 International Journal of Optics

[20] Q Liang and M Liu ldquoA tightly coupled VLC-inertial lo-calization system by EKFrdquo IEEE Robotics and AutomationLetters vol 5 no 2 pp 3129ndash3136 2020

[21] W Guan Research on High Precision Indoor Visible LightLocalization Algorithm Based on Image sensor Masterrsquosesis South China University of Technology GuangdongProvince China 2019

[22] C Qiu B Hussain and C P Yue ldquoBluetooth based wirelesscontrol for iBeacon and VLC enabled lightingrdquo in Proceedingsof the 2019 IEEE 8th Global Conference on Consumer Elec-tronics (GCCE) Osaka Japan October 2019

[23] B Hussain C Qiu and C P Yue ldquoSmart lighting control andservices using visible light communication and Bluetoothrdquo inProceedings of the 2019 IEEE 8th Global Conference onConsumer Electronics (GCCE) Osaka Japan October 2019

[24] W Guan Z Liu S Wen H Xie and X Zhang ldquoVisible lightdynamic positioning method using improved camshift-kal-man algorithmrdquo IEEE Photonics Journal vol 11 no 6pp 1ndash22 2019

[25] C Xie ldquoe LED-ID detection and recognition method basedon visible light positioning using proximity methodrdquo IEEEPhotonics vol 10 no 2 p 7902116 2018

[26] G Weipeng S Wen L Liu and H Zhang ldquoHigh-precisionindoor positioning algorithm based on visible light com-munication using complementary metal-ndashoxidendashsemiconductor image sensorrdquo Optical Engineeringvol 58 p 1 2019

International Journal of Optics 11

Page 4: High-Accuracy and Real-Time Indoor Positioning System ...

22 Rolling Shutter Mechanism of the CMOS Image SensorBased on the modulated LED lamps the recognition of LED-ID is realized by using the image sensor-based VLP whichutilizes the rolling shutter mechanism of the CMOS imagesensor All pixels on the sensor are exposed at the same timeas the CCD sensor so at the end of each exposure the datafor all pixels is read out simultaneously is mechanism isoften referred to as the global shutter for the CCD sensorHowever for a CMOS sensor when exposure of one row is

completed the data of this row will be read out immediatelywhich means that the exposure and data reading are per-formed row by row is working mechanism is called therolling shutter mechanism of the CMOS sensor e LEDimage captured by the CMOS sensor would produce brightand dark stripes while turning the LED on and off during aperiod of exposure due to the rolling shutter mechanism ofCMOS sensor As shown in Figure 6 the LED image ac-quisition using CMOS sensor is illustrated

Analog peripherals

Digital peripherals

Timerscounters

DMA

Processor

OscillatorsPower

managementunit

AH

B bu

s

APB

bus

RF + modem+ LL privacy

GPIO

Memory(flashROMRAM)

Figure 3 Block diagram of BLE SoC

Powersource

Changed VLCdata

IO port

Switch

Adjust VLC datafrequency and current

Rectifying andstep-down

BLE SoC

BLE

1001

VLC IDVLC ID

BluetoothMAC address

ID1 Location Bluetooth MAC

ID2 Location Bluetooth MAC

IDn Location Bluetooth MAC1 0 11 0 1 1 10

Figure 5 Smart lighting system architecture

VLC data

1 0 11 0 1 1 10

RAM

Flash

DMA SPI IO

Load VLC datato the RAM once

Figure 4 VLC control signal generation

4 International Journal of Optics

23e Region of Interest (LED-ROI) Area Tracking Based onImprovedCAMshift-KalmanAlgorithm As far as we knowreal-time positioning of mobile robot requires real-timeshooting and processing of each image to obtain theregion of interest (ROI) area of the LED luminaire in theimage e success of the LED-ID detection and recog-nition method is inseparable from the accurate detectionof LED-ROI which determines the real-time perfor-mance and robustness of the system We use an improvedCAMshift-Kalman algorithm to improve the accuracyand robustness of the VLP-based system In order toobtain better tracking performance the Bhattacharyyacoefficient was used to update the observation noisematrix of Kalman filter in real time e CAMshift al-gorithm is used to track the output position of the targetas the measurement signal and Kalman filter algorithm isused to correct the target position e algorithm not onlycombines the CAMshift algorithm with the Kalman filterbut also introduces the Bhattacharyya coefficient Formore details one could refer to our previous work [24]We tested the effect of the algorithm in the dynamic casewith modulator tubes as background interference Underthe interference of LED tube this algorithm can stillensure the accurate detection of LED-ROI which reflectsrobustness and good real-time performance of the al-gorithm e dynamic tracking performance of the im-proved CAMshift-Kalman algorithm is shown inFigure 7

24 e LED-ID Recognition e LED-ID recognition inthis paper is to give certain characteristics to the light anddark stripes captured in the image By introducing fourcharacteristic variables frequency duty cycle distance and

phase difference coefficient the characteristics of LED-IDoptical strip code captured by CMOS image sensor are givene characteristic can be the number of light stripes of thestripe code in the LED pixel area the area of the LED pixelthe width of the bright stripe is greater than the width of thebright stripe and the width of the dark stripe and the phasedifference coefficient between the stripes en after theLED-ROI was obtained by the aforementioned algorithmthese features are extracted through simple image processingtechnology and the location information of LED-ID isrecognized through the preestablished database e detailscould be found in [25]

25 Triple-Light Positioning Algorithm As shown in Fig-ure 8 when three or more LEDs are detected in the vision ofthe image sensor the robot selects three LEDs for posi-tioning after information extraction through the algorithme coordinates of the LEDs are (x1 y1 z1) (x2 y2 z2) and(x3 y3 z3) Generally speaking the height of the ceiling is thesame in one place so z1 z2 z3 e position of the centerpoint O of the lens can be calculated by the similar trianglerelationship

erefore according to lens focal length f and the co-ordinates on the image plane (i j) the distance drsquo

Ok k 1 2 3between each LED image center and lens center O can becalculated

drsquoOk

f2

+ i2k + j

2k

1113969

(1)

And according to the similar triangle the distance dOk k

1 2 3 from the center O of the lens to the LED anchor can beobtained

dOk H times d

rsquoOk

f (2)

where H is the vertical distance between the LED and thelens plane

H fa

aprime f

b

bprime f

c

cprime (3)

en the coordinates of the lens center point O (x y z)can be calculated by the following formula

x minus x1( 11138572

+ y minus y1( 11138572

+(H)2

d2o1

x minus x2( 11138572

+ y minus y2( 11138572

+(H)2

d2o21113966

x minus x3( 11138572

+ y minus y3( 11138572

+(H)2

d2o3

(4)

e aforementioned formula can be converted to

x

y

⎡⎢⎢⎢⎣ ⎤⎥⎥⎥⎦ 12

x2 minus x1 y2 minus y1

x3 minus x1 y3 minus y1

⎡⎢⎢⎢⎣ ⎤⎥⎥⎥⎦

minus 1 d2o1 minus d

2o2 minus x

21 + y

21 minus x

22 minus y

221113872 1113873

d2o1 minus d

2o3 minus x

21 + y

21 minus x

23 minus y

231113872 1113873

⎡⎢⎢⎢⎢⎢⎢⎢⎢⎢⎣⎤⎥⎥⎥⎥⎥⎥⎥⎥⎥⎦ (5)

Row

Image

Time

Exposure time Read-out time

Frame rate

Figure 6 Rolling shutter mechanism of the CMOS image sensorand the LED image acquisition

International Journal of Optics 5

And the z coordinate is

z z1 minus H z2 minus H z3 minus H (6)

According to equations (5) and (6) we can calculate theposition of the camera lens center O Meanwhile the co-ordinates of the robot can be calculated through the staticconversion relationship between the camera and the robotbase e analysis of the algorithm could be found in ourprevious work [26]

3 Implementation of the VLP-BasedRobot System

31 System Setup and Implementation e system setup ofthe VLP-based robot system is shown in Table 1 Mobile robot(Turtlebot3) was used to build the VLP-based system plat-form e images of LEDs were shot by MindVision UB-300industrial camera which is fixed vertically on the robot byprior extrinsic calibration and transmitted by Raspberry Pi 3

Model B As shown in Figure 9 we implemented the system inthe office with three smart LED lamps installed on the ceilingfor positioning e related information of LED lamps isshown in Figure 9 In positioning process the image sensorcan detect at least three LEDs to ensure the realization of thetriple-light positioning Algorithm And for the weak pro-cessor performance of Raspberry Pi 3 Model B the programof image processing and location calculation is run on aremote controller e operating system of the Turtlebot3robot is Ubuntu 1604 MATE which corresponds to theKinetic version of ROS and the system of the remote con-troller is Ubuntu 1604 desktop We demonstrate the per-formance of the VLP robot system on ROS e camerainstalled vertically on the robot captures the image and thenobtains the location information stored by the LED imme-diately through the LED-ROI dynamic tracking algorithmand LED-ID recognition and finally realizes the centimeter-level location through the triple-light positioning algorithmPurple dots are used to represent the positioning result of thegeometric center of the camera obtained by VLP

Interference

LED-ROI obtained by thedynamic tracking algorithm

(a) (b)

(c) (d)

Figure 7 Dynamic tracking performance with modulator tube as background interference of the improved CAMshift-Kalman algorithm

6 International Journal of Optics

32 Positioning Accuracy of the VLP-Based Robot SystemIn order to evaluate the positioning accuracy of the systemtwo series of experiments were carried out e first serieswas used to test the performance of the stationary robot Asshown in Figure 10 uniformly distributed points in theexperimental area were randomly selected to calculate thestandard deviation between the measured position and the

actual position For the positioning accuracy shown inFigure 10 90 positioning errors are less than 3231 cm themaximum positioning error is no more than 5 cm and theaverage positioning error is 214 cm

We test the performance for moving mobile robot in thesecond series of experiments and we control the robot to gostraight and travel a distance after turning to demonstrate

LED1 (x1 y1 z1) LED2 (x2 y2 z2)

(x1 y1 z1)

(x3 y3 z3)

(x2 y2 z2)

LED3 (x3 y3 z3)

Camera

(i2 f2)

(i3 f3)

(i1 f1)

Image plane

H

f

a

b c

bprime

aprime

cprimeOprime

Okprime

LensO

dok

dprimeok

(i3 j3)

(i1 j1)(i2 j2)

Figure 8 e procedure of triple-light positioning

Table 1 Parameters in this paper

Camera specificationsModel MindVision UB-300Pixel (HtimesV) 2048times1536Time of exposure 002msType of shutter acquisition mode Electronic rolling shutterAcquisition mode Successive and soft trigger

Turtlebot3 robot specificationsModule Raspberry pi 3 BCPU Quad core 12 GHz broadcom BCM2837RAM 1GBOperating system Ubuntu mate 1604

Remote controller specificationsModule Acer VN7-593GCPU Quad core Intelreg Coretrade i7-7700HQOperating system Ubuntu 1604 LTS

System platform specificationsSize (LtimesWtimesH) 146times146times 285cm3

LED specificationsCoordinates of LED1 (cm) (13 159 285)Coordinates of LED2 (cm) (159 159 285)Coordinates of LED3 (cm) (159 13 285)e half-power angles of LEDdeg (ψ12) 60Current of each LED 300mARated power 18WOptical output 1500lm 6000KLED modulation frequency 5 kHzDiameter of each LED 150mm

International Journal of Optics 7

LED lamp

Laser radar

Industrial camera

Wheel

Raspbeery Pi

Open CR

Battery

Electricmachinery

TurtleBot3 withindustrial camera

(00)

x

y

LED3 5KHZ (15913)

LED2 3KHZ (159159)

LED1 1KHZ (13159)

Figure 9 VLP-based system implementation platform

80

60

40

20

0

4020

0ndash20

ndash40

200

ndash20ndash40

40

x (cm)y (cm)

z (cm

)

Real positionEstimated position

(a)

ndash40ndash40

ndash30 ndash20 ndash10 0 10 20 30 40

ndash30

ndash20

ndash10

0

10

20

30

40

y (cm

)

x (cm)

Real positionEstimated position

(b)

30

25

20

15

10

5

00 05 1 15 2 25 3 35 4 45 5

Error (cm)

e n

umbe

r of p

ositi

onin

g er

ror

(c)

1

09

08

07

06

05

04

03

02

01

00 1 2 3 4 5 6

Position Error (cm)

e c

df o

f pos

ition

erro

r (

) X 3231Y 0901

(d)

Figure 10 Stationary positioning of VLP-based robot system (a)e 3D positioning results (b)e horizontal view of the 3-D positioningresults (c) Histogram of the positioning error in the system (d) e cumulative distribution function (CDF) curves of positioning errors

8 International Journal of Optics

the real positioning effect of the localization system withdifferent speed As shown in Figure 11 the measured po-sitions of the robot are plotted as purple dots and the robotindustrial camera are highly coincident with the positioningresults in different motion states in the refined section of therobot which reflects the good positioning effect of the VLCpositioning system on ROS mobile robot Due to the re-strictions of ground field we let the robot travel at the speedof 2ndash4 cms However the moving speed of the robot has noeffect on the positioning accuracy but depends on the real-time positioning which is discussed in Section 33 It is worthmentioning that the position results in the coordinates of thecenter of the camera fixed vertically on the robot which canbe transformed to the center of the robot by coordinatetransformation in the VLP robot system

33 Real-Time Positioning Speed of the VLP-Based RobotSystem Location speed is another key factor in the VLP-based robot system which represents the mobile robotrsquosmoving speed when receiving VLC information andcalculating the current position in time erefore for theIS-based VLP system before the mobile robot passesthrough the VLP information transmitter it needs tocapture and extract the VLP information that is beforethe VLC light leaves the field of view (FOV) of the CMOSimage sensor erefore the maximum supportedmoving speed is the speed at which the mobile robotextracts the VLP information over a period of time from

the edge of the leftmost to the right We assume that thetarget LED is tangent to the left edge of the image in frameN and to the right edge of the image in frame N + 1 asshown in Figure 12 and the maximum velocity v of themobile robot is defined as v st where s is the distancethe terminal device moves between two frames and t isthe time from the beginning of the image capture to thecompletion of the positioning calculation Based on theproportional relationship between image coordinates andworld coordinates the relation is expressed as sr Ddwhere r is the pixel length of the image D is the actualdiameter of the LED and D is the diameter of the LED inthe image

As mentioned in Section 31 due to the weak pro-cessing capacity of Raspberry Pi 3 in Turtlebot3 most ofthe image process cannot be directly performed onRaspberry Pi erefore the images captured by theCMOS camera in the robot are transmitted to a remotelaptop for processing leading to positioning delays Weset the timer and counter in the program and the averagetime t is 03893 s As can be seen from Table 1 the actualdiameter D of LED is 150 mm e pixel length r of theimage is 800 (pixels) and the LED diameter d in theimage is 5554 (pixels) According to the definition ofspeed the maximum movement speed of the positioningterminal is 9497 ms vasymp 20 kmh If the robot isequipped with a small computer with better computingpower it can save the time it takes to transmit data andsupport higher moving speeds

Figure 11 e positioning effects for mobile robot moving along different tracks with different speeds of VLP-based robot system

Frame N

(a)

Frame N + 1

(b)

Figure 12 e relative position of the LED in two consecutive frames

International Journal of Optics 9

4 Conclusion

In this paper we design a high-accuracy and real-timeVLP-based robot system on ROS and implement theindoor VLP system based on TurtleBot3 robot platform inthe office as a practical application scenario which isequipped with designed LED lamps having VLC andBluetooth control function e implementation of thesystem shows that the positioning accuracy of the VLP-based system can reach cm-level position accuracy of214 cm and have a good real-time performance that cansupport the moving speed up to 20 kmh which has agreat research prospect in the autonomous positioningand navigation of indoor robots In the future we willapply the VLP-based system to the fusion localization orrobot navigation of ROS making full use of the advan-tages of VLP while making up for the defects of visiblelight positioning

Data Availability

e source code of our finished article is not open to theoutside world and the experimental data have been reflectedin the Experimental Analysis section

Conflicts of Interest

e authors declare that they have no conflicts of interest

Authorsrsquo Contributions

Xianmin Li contributed to the conception of the studyperformed the data analysis and helped perform the analysiswith constructive discussions Zihong Yan contributedsignificantly to the analysis and manuscript preparation andwrote the manuscript Linyi Huang performed the experi-ment and analysis with constructive discussions and wrotethe manuscript Shihuan Chen helped revise the manuscriptManxi Liu helped perform the analysis with constructivediscussions Xianmin Li Zihong Yan and Linyi Huangcontributed equally to this work

References

[1] W Guan S Chen S Wen Z Tan H Song and W HouldquoHigh-accuracy robot indoor localization scheme based onrobot operating system using visible light positioningrdquo IEEEPhotonics Journal vol 12 no 2 pp 1ndash16 2020

[2] J Civera A J Davison and J Montiel ldquoInverse depth pa-rametrization for monocular slamrdquo IEEE Transactions onRobotics vol 24 no 5 p 932 2008

[3] M Liu R Siegwart and D P Fact ldquoTowards topologicalmapping and scene recognition with color for omnidirec-tional camerardquo in Proceedings of the International Conferenceon Robotics and Automation (ICRA) Saint Paul MN USAMay 2012

[4] P Henry M Krainin E Herbst X Ren and D Fox ldquoRGB-Dmapping using depth cameras for dense 3D modeling ofindoor environmentsrdquo in Proceedings of the 12th Interna-tional Symposium on Experimental Robotics ISER 2010 NewDelhi India December 2010

[5] M Liu F Pomerleau F Colas and R Siegwart ldquoNormalestimation for pointcloud using GPU based sparse tensorvotingrdquo in Proceedings of the IEEE International Conferenceon Robotics and Biomimetics (ROBIO) Shenzhen ChinaDecember 2013

[6] R Miyagusuku A Yamashita and H Asama ldquoData infor-mation fusion from multiple access points for WiFi-basedself-localizationrdquo IEEE Robotics and Automation Lettersvol 4 no 2 pp 269ndash276 2019

[7] H Chen W Guan S Li and Y Wu ldquoIndoor high precisionthree-dimensional positioning system based on visible lightcommunication using modified genetic algorithmrdquo OpticsCommunications vol 413 pp 103ndash120 2018

[8] W Guan ldquoA novel three-dimensional indoor positioningalgorithm design based on visible light communicationrdquoOptics Communication vol 392 pp 282ndash293 2017

[9] S Juneja and S Vashisth ldquoIndoor positioning system usingvisible light communicationrdquo in Proceedings of the 2017 In-ternational Conference on Computing and CommunicationTechnologies for Smart Nation (IC3TSN) Gurgaon IndiaOctober 2017

[10] S Chen W Guan Z Tan et al ldquoHigh accuracy and erroranalysis of indoor visible light positioning algorithm based onimage sensorrdquo 2019 httparxivorgabs191111773

[11] L Bai Y Yang C Feng et al ldquoAn enhanced camera assistedreceived signal strength ratio algorithm for indoor visible lightpositioningrdquo in Proceedings of the 2020 IEEE InternationalConference On Communications Workshops (Icc Workshops)Dublin Ireland June 2020

[12] A H A Bakar T Glass H Y Tee F Alam and M LeggldquoAccurate visible light positioning using multiple photodiodereceiver and machine learningrdquo IEEE Transactions on In-strumentation and Measurement 2020

[13] L Li P Hu C Peng G Shen and F Zhao ldquoEpsilon a visiblelight based positioning systemrdquo in Proceedings of the 11thUSENIX Conference on Networked Systems Design andImplementation Berkeley CA USA April 2014

[14] N Rajagopal P Lazik and A Rowe ldquoVisual light landmarksfor mobile devicesrdquo in Proceedings of the 13th InternationalSymposium Information Processing in Sensor Networks BerlinGermany April 2014

[15] Y-S Kuo P Pannuto K-J Hsiao and P Dutta ldquoLuxaposeindoor positioning with mobile phones and visible lightrdquo inProceedings of the 20th Annual International Conference onHigh Performance Computing Goa India December 2014

[16] Z Yang Z Wang J Zhang C Huang and Q ZhangldquoWearables can afford light-weight indoor positioning withvisible lightrdquo in Proceedings of the 13th Annual InternationalConference on Mobile Systems Applications and ServicesFlorence Italy May 2015

[17] J Fang Z Yang S Long et al ldquoHigh-speed indoor navigationsystem based on visible light and mobile phonerdquo IEEEPhotonics Journal vol 9 no 2 pp 1ndash11 2017

[18] M S Rahman M M Haque and K D Kim ldquoHigh precisionindoor positioning using lighting LED and image sensorrdquo inProceedings of the International Conference on Computer andInformation Technology Dhaka Bangladesh December 2011

[19] Q Liang J Lin and M Liu ldquoTowards robust visible lightpositioning under LED shortage by visual-inertial fusionrdquo inProceedings of the 2019 International Conference on IndoorPositioning and Indoor Navigation (IPIN) Pisa Italy October2019

10 International Journal of Optics

[20] Q Liang and M Liu ldquoA tightly coupled VLC-inertial lo-calization system by EKFrdquo IEEE Robotics and AutomationLetters vol 5 no 2 pp 3129ndash3136 2020

[21] W Guan Research on High Precision Indoor Visible LightLocalization Algorithm Based on Image sensor Masterrsquosesis South China University of Technology GuangdongProvince China 2019

[22] C Qiu B Hussain and C P Yue ldquoBluetooth based wirelesscontrol for iBeacon and VLC enabled lightingrdquo in Proceedingsof the 2019 IEEE 8th Global Conference on Consumer Elec-tronics (GCCE) Osaka Japan October 2019

[23] B Hussain C Qiu and C P Yue ldquoSmart lighting control andservices using visible light communication and Bluetoothrdquo inProceedings of the 2019 IEEE 8th Global Conference onConsumer Electronics (GCCE) Osaka Japan October 2019

[24] W Guan Z Liu S Wen H Xie and X Zhang ldquoVisible lightdynamic positioning method using improved camshift-kal-man algorithmrdquo IEEE Photonics Journal vol 11 no 6pp 1ndash22 2019

[25] C Xie ldquoe LED-ID detection and recognition method basedon visible light positioning using proximity methodrdquo IEEEPhotonics vol 10 no 2 p 7902116 2018

[26] G Weipeng S Wen L Liu and H Zhang ldquoHigh-precisionindoor positioning algorithm based on visible light com-munication using complementary metal-ndashoxidendashsemiconductor image sensorrdquo Optical Engineeringvol 58 p 1 2019

International Journal of Optics 11

Page 5: High-Accuracy and Real-Time Indoor Positioning System ...

23e Region of Interest (LED-ROI) Area Tracking Based onImprovedCAMshift-KalmanAlgorithm As far as we knowreal-time positioning of mobile robot requires real-timeshooting and processing of each image to obtain theregion of interest (ROI) area of the LED luminaire in theimage e success of the LED-ID detection and recog-nition method is inseparable from the accurate detectionof LED-ROI which determines the real-time perfor-mance and robustness of the system We use an improvedCAMshift-Kalman algorithm to improve the accuracyand robustness of the VLP-based system In order toobtain better tracking performance the Bhattacharyyacoefficient was used to update the observation noisematrix of Kalman filter in real time e CAMshift al-gorithm is used to track the output position of the targetas the measurement signal and Kalman filter algorithm isused to correct the target position e algorithm not onlycombines the CAMshift algorithm with the Kalman filterbut also introduces the Bhattacharyya coefficient Formore details one could refer to our previous work [24]We tested the effect of the algorithm in the dynamic casewith modulator tubes as background interference Underthe interference of LED tube this algorithm can stillensure the accurate detection of LED-ROI which reflectsrobustness and good real-time performance of the al-gorithm e dynamic tracking performance of the im-proved CAMshift-Kalman algorithm is shown inFigure 7

24 e LED-ID Recognition e LED-ID recognition inthis paper is to give certain characteristics to the light anddark stripes captured in the image By introducing fourcharacteristic variables frequency duty cycle distance and

phase difference coefficient the characteristics of LED-IDoptical strip code captured by CMOS image sensor are givene characteristic can be the number of light stripes of thestripe code in the LED pixel area the area of the LED pixelthe width of the bright stripe is greater than the width of thebright stripe and the width of the dark stripe and the phasedifference coefficient between the stripes en after theLED-ROI was obtained by the aforementioned algorithmthese features are extracted through simple image processingtechnology and the location information of LED-ID isrecognized through the preestablished database e detailscould be found in [25]

25 Triple-Light Positioning Algorithm As shown in Fig-ure 8 when three or more LEDs are detected in the vision ofthe image sensor the robot selects three LEDs for posi-tioning after information extraction through the algorithme coordinates of the LEDs are (x1 y1 z1) (x2 y2 z2) and(x3 y3 z3) Generally speaking the height of the ceiling is thesame in one place so z1 z2 z3 e position of the centerpoint O of the lens can be calculated by the similar trianglerelationship

Therefore, according to the lens focal length $f$ and the coordinates $(i_k, j_k)$ on the image plane, the distance $d'_{Ok}$ $(k = 1, 2, 3)$ between each LED image center and the lens center $O$ can be calculated:

$$d'_{Ok} = \sqrt{f^2 + i_k^2 + j_k^2}. \qquad (1)$$

And according to the similar triangles, the distance $d_{Ok}$ $(k = 1, 2, 3)$ from the lens center $O$ to each LED anchor can be obtained:

$$d_{Ok} = \frac{H \cdot d'_{Ok}}{f}, \qquad (2)$$

where $H$ is the vertical distance between the LED plane and the lens plane:

$$H = f\,\frac{a}{a'} = f\,\frac{b}{b'} = f\,\frac{c}{c'}. \qquad (3)$$

Then the coordinates of the lens center point $O = (x, y, z)$ can be calculated from the following system of equations:

$$\begin{cases} (x - x_1)^2 + (y - y_1)^2 + H^2 = d_{O1}^2, \\ (x - x_2)^2 + (y - y_2)^2 + H^2 = d_{O2}^2, \\ (x - x_3)^2 + (y - y_3)^2 + H^2 = d_{O3}^2. \end{cases} \qquad (4)$$

The aforementioned system can be converted to

$$\begin{bmatrix} x \\ y \end{bmatrix} = \frac{1}{2} \begin{bmatrix} x_2 - x_1 & y_2 - y_1 \\ x_3 - x_1 & y_3 - y_1 \end{bmatrix}^{-1} \begin{bmatrix} d_{O1}^2 - d_{O2}^2 - \left(x_1^2 + y_1^2 - x_2^2 - y_2^2\right) \\ d_{O1}^2 - d_{O3}^2 - \left(x_1^2 + y_1^2 - x_3^2 - y_3^2\right) \end{bmatrix}. \qquad (5)$$

Figure 6: Rolling shutter mechanism of the CMOS image sensor and the LED image acquisition (image rows vs. time, with exposure time, read-out time, and frame rate indicated).


And the $z$ coordinate is

$$z = z_1 - H = z_2 - H = z_3 - H. \qquad (6)$$

According to equations (5) and (6), we can calculate the position of the camera lens center $O$. Meanwhile, the coordinates of the robot can be calculated through the static transformation between the camera and the robot base. The analysis of the algorithm can be found in our previous work [26]. A compact implementation sketch of equations (1)-(6) is given below.
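For concreteness, the following is a minimal sketch of equations (1)-(6). The function name is ours, the example image coordinates, focal length, and $H$ value are made up, and the LED coordinates are those of Table 1; all world lengths are assumed to share one unit (cm here).

```python
import numpy as np

def triple_light_position(leds, img_pts, f, H):
    """Lens-center position from three LEDs, following equations (1)-(6).

    leds:    3x3 array of LED world coordinates (x_k, y_k, z_k), equal z
    img_pts: 3x2 array of LED image-plane coordinates (i_k, j_k), in pixels
    f:       lens focal length in the same unit as img_pts (pixels)
    H:       vertical distance between the LED plane and the lens plane
    """
    leds = np.asarray(leds, float)
    img_pts = np.asarray(img_pts, float)

    d_img = np.sqrt(f**2 + (img_pts**2).sum(axis=1))  # eq. (1): d'_Ok
    d = H * d_img / f                                 # eq. (2): d_Ok

    (x1, y1, z1), (x2, y2, _), (x3, y3, _) = leds
    # Eq. (5): linear system from subtracting the sphere equations of eq. (4).
    A = np.array([[x2 - x1, y2 - y1],
                  [x3 - x1, y3 - y1]])
    b = np.array([d[0]**2 - d[1]**2 - (x1**2 + y1**2 - x2**2 - y2**2),
                  d[0]**2 - d[2]**2 - (x1**2 + y1**2 - x3**2 - y3**2)])
    x, y = 0.5 * np.linalg.solve(A, b)
    return x, y, z1 - H                               # eq. (6)

# Illustrative call: LED coordinates from Table 1 (cm); the image points,
# focal length (pixels), and H (cm) are placeholders, H would come from eq. (3).
pos = triple_light_position(
    leds=[(13, 159, 285), (159, 159, 285), (159, 13, 285)],
    img_pts=[(-320, 240), (310, 250), (305, -255)],
    f=1200.0, H=257.0)
print(pos)
```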

3. Implementation of the VLP-Based Robot System

3.1. System Setup and Implementation. The system setup of the VLP-based robot system is shown in Table 1. A mobile robot (TurtleBot3) was used to build the VLP-based system platform. The images of the LEDs were shot by a MindVision UB-300 industrial camera, which is fixed vertically on the robot with prior extrinsic calibration, and transmitted by a Raspberry Pi 3 Model B. As shown in Figure 9, we implemented the system in an office with three smart LED lamps installed on the ceiling for positioning; the related information of the LED lamps is also shown in Figure 9. In the positioning process, the image sensor can detect at least three LEDs, which ensures the applicability of the triple-light positioning algorithm. Owing to the weak processor performance of the Raspberry Pi 3 Model B, the image processing and location calculation programs run on a remote controller. The operating system of the TurtleBot3 robot is Ubuntu 16.04 MATE, which corresponds to the Kinetic version of ROS, and the system of the remote controller is Ubuntu 16.04 desktop. We demonstrate the performance of the VLP robot system on ROS: the camera installed vertically on the robot captures an image, immediately obtains the location information stored in the LED through the LED-ROI dynamic tracking algorithm and LED-ID recognition, and finally realizes centimeter-level localization through the triple-light positioning algorithm. Purple dots are used to represent the positioning result of the geometric center of the camera obtained by VLP. A sketch of how such a result could be published on ROS follows.
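The paper does not publish its ROS interfaces, so the following sketch only illustrates how a VLP fix of this kind could be published as a standard pose message on ROS Kinetic; the node, topic, and frame names are assumptions.

```python
import rospy
from geometry_msgs.msg import PoseStamped

def publish_vlp_pose(pub, x_cm, y_cm, z_cm):
    """Wrap a VLP fix (cm, the paper's unit) into a PoseStamped in meters."""
    msg = PoseStamped()
    msg.header.stamp = rospy.Time.now()
    msg.header.frame_id = "map"               # assumed fixed frame
    msg.pose.position.x = x_cm / 100.0
    msg.pose.position.y = y_cm / 100.0
    msg.pose.position.z = z_cm / 100.0
    msg.pose.orientation.w = 1.0              # triple-light VLP yields no heading
    pub.publish(msg)

if __name__ == "__main__":
    rospy.init_node("vlp_positioning")
    pub = rospy.Publisher("/vlp/pose", PoseStamped, queue_size=10)
    rate = rospy.Rate(10)
    while not rospy.is_shutdown():
        # x, y, z would come from triple_light_position() on the latest frame.
        publish_vlp_pose(pub, 20.0, 30.0, 0.0)  # placeholder values
        rate.sleep()
```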

Figure 7: Dynamic tracking performance of the improved CAMshift-Kalman algorithm with a modulated LED tube as background interference; panels (a)-(d) show the LED-ROI obtained by the dynamic tracking algorithm despite the interference.


3.2. Positioning Accuracy of the VLP-Based Robot System. In order to evaluate the positioning accuracy of the system, two series of experiments were carried out. The first series tested the performance of the stationary robot. As shown in Figure 10, uniformly distributed points in the experimental area were randomly selected to calculate the deviation between the measured position and the actual position. For the positioning accuracy shown in Figure 10, 90% of the positioning errors are less than 3.231 cm, the maximum positioning error is no more than 5 cm, and the average positioning error is 2.14 cm. These statistics can be reproduced from raw error samples as sketched below.
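The snippet below shows the straightforward computation of these summary statistics; the error samples here are synthetic placeholders, since the measured errors themselves are not published.

```python
import numpy as np

# Placeholder error samples (cm); in the experiment each sample is the
# Euclidean distance between an estimated and a true position.
rng = np.random.default_rng(0)
errors = np.abs(rng.normal(2.14, 1.0, 500))

print(f"mean error: {errors.mean():.2f} cm")                 # average error
print(f"90% CDF point: {np.percentile(errors, 90):.3f} cm")  # cf. Figure 10(d)
print(f"maximum error: {errors.max():.2f} cm")
```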

We tested the performance of the moving robot in the second series of experiments, controlling the robot to go straight and then travel a further distance after turning, in order to demonstrate the real positioning effect of the localization system at different speeds.

Figure 8: The procedure of triple-light positioning: three ceiling LEDs at $(x_k, y_k, z_k)$ are imaged at $(i_k, j_k)$ on the image plane behind the lens center $O$ (focal length $f$), a vertical distance $H$ below the LED plane, with the distances $d_{Ok}$ and $d'_{Ok}$ indicated.

Table 1: Parameters in this paper.

Camera specifications:
  Model: MindVision UB-300
  Pixels (H×V): 2048×1536
  Exposure time: 0.02 ms
  Shutter type: electronic rolling shutter
  Acquisition mode: successive and soft trigger

TurtleBot3 robot specifications:
  Module: Raspberry Pi 3 Model B
  CPU: quad-core 1.2 GHz Broadcom BCM2837
  RAM: 1 GB
  Operating system: Ubuntu MATE 16.04

Remote controller specifications:
  Module: Acer VN7-593G
  CPU: quad-core Intel® Core™ i7-7700HQ
  Operating system: Ubuntu 16.04 LTS

System platform specifications:
  Size (L×W×H): 146×146×285 cm³

LED specifications:
  Coordinates of LED1 (cm): (13, 159, 285)
  Coordinates of LED2 (cm): (159, 159, 285)
  Coordinates of LED3 (cm): (159, 13, 285)
  Half-power angle of LED, ψ1/2 (deg): 60
  Current of each LED: 300 mA
  Rated power: 18 W
  Optical output: 1500 lm, 6000 K
  LED modulation frequency: 5 kHz
  Diameter of each LED: 150 mm


Figure 9: VLP-based system implementation platform: the TurtleBot3 with industrial camera (LED lamp, laser radar, industrial camera, wheels, Raspberry Pi, OpenCR board, battery, and motors labeled) and the test-area layout with the origin at (0, 0), LED1 (1 kHz) at (13, 159), LED2 (3 kHz) at (159, 159), and LED3 (5 kHz) at (159, 13).

Figure 10: Stationary positioning of the VLP-based robot system. (a) The 3D positioning results. (b) The horizontal view of the 3D positioning results. (c) Histogram of the positioning error in the system. (d) The cumulative distribution function (CDF) curves of positioning errors, with 90.1% of errors below 3.231 cm.


As shown in Figure 11, the measured positions of the robot are plotted as purple dots, and the track of the robot's industrial camera coincides closely with the positioning results in the different motion states, which reflects the good positioning performance of the VLC positioning system on a ROS mobile robot. Due to the restrictions of the test field, we let the robot travel at a speed of 2-4 cm/s. However, the moving speed of the robot has no effect on the positioning accuracy; the limiting factor is the real-time positioning speed, which is discussed in Section 3.3. It is worth mentioning that the positioning results are the coordinates of the center of the camera fixed vertically on the robot, which can be transformed to the center of the robot by a coordinate transformation in the VLP robot system.

3.3. Real-Time Positioning Speed of the VLP-Based Robot System. Positioning speed is another key factor in the VLP-based robot system: it determines the mobile robot's maximum moving speed at which VLC information can still be received and the current position calculated in time. For the image-sensor-based VLP system, the mobile robot must capture and extract the VLP information before it passes the VLP transmitter, that is, before the VLC light leaves the field of view (FOV) of the CMOS image sensor. Therefore, the maximum supported moving speed is the speed at which the mobile robot can extract the VLP information while the LED traverses the image from the leftmost edge to the rightmost edge. We assume that the target LED is tangent to the left edge of the image in frame N and to the right edge of the image in frame N + 1, as shown in Figure 12, and the maximum velocity v of the mobile robot is defined as v = s/t, where s is the distance the terminal device moves between the two frames and t is the time from the beginning of image capture to the completion of the positioning calculation. Based on the proportional relationship between image coordinates and world coordinates, the relation is expressed as s/r = D/d, where r is the pixel length of the image, D is the actual diameter of the LED, and d is the diameter of the LED in the image.

As mentioned in Section 3.1, due to the weak processing capacity of the Raspberry Pi 3 in the TurtleBot3, most of the image processing cannot be performed directly on the Raspberry Pi. Therefore, the images captured by the CMOS camera on the robot are transmitted to a remote laptop for processing, leading to positioning delays. We set a timer and counter in the program, and the average time t is 0.3893 s. As can be seen from Table 1, the actual diameter D of the LED is 150 mm, the pixel length r of the image is 800 pixels, and the LED diameter d in the image is 55.54 pixels. According to the definition of speed, the maximum movement speed of the positioning terminal is therefore about 5.55 m/s, i.e., v ≈ 20 km/h, as the short calculation below confirms. If the robot were equipped with a small onboard computer with better computing power, the data transmission time could be saved and higher moving speeds supported.
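As a check, the maximum-speed estimate follows directly from the numbers above:

```python
# Reproducing the maximum-speed estimate from the values given in the text.
D = 0.150      # actual LED diameter (m), from Table 1
r = 800        # pixel length of the image (pixels)
d = 55.54      # LED diameter in the image (pixels)
t = 0.3893     # average capture-to-position time (s)

s = D * r / d  # s/r = D/d  ->  s is about 2.16 m between the two frames
v = s / t      # maximum supported speed, about 5.55 m/s
print(f"s = {s:.3f} m, v = {v:.2f} m/s = {v * 3.6:.1f} km/h")  # about 20 km/h
```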

Figure 11: The positioning effects for the mobile robot moving along different tracks at different speeds in the VLP-based robot system.

Figure 12: The relative position of the LED in two consecutive frames: (a) frame N; (b) frame N + 1.


4. Conclusion

In this paper, we design a high-accuracy and real-time VLP-based robot system on ROS and implement the indoor VLP system on the TurtleBot3 robot platform in an office, a practical application scenario, equipped with the designed LED lamps having VLC and Bluetooth control functions. The implementation shows that the VLP-based system can reach centimeter-level positioning accuracy of 2.14 cm on average with good real-time performance, supporting moving speeds up to approximately 20 km/h, which gives it great research prospects in the autonomous positioning and navigation of indoor robots. In the future, we will apply the VLP-based system to fusion localization or robot navigation on ROS, making full use of the advantages of VLP while compensating for the defects of visible light positioning.

Data Availability

The source code of this work is not publicly available, and the experimental data are reflected in the experimental analysis sections of this article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Authorsrsquo Contributions

Xianmin Li contributed to the conception of the study, performed the data analysis, and helped perform the analysis with constructive discussions. Zihong Yan contributed significantly to the analysis and manuscript preparation and wrote the manuscript. Linyi Huang performed the experiment, contributed to the analysis with constructive discussions, and wrote the manuscript. Shihuan Chen helped revise the manuscript. Manxi Liu helped perform the analysis with constructive discussions. Xianmin Li, Zihong Yan, and Linyi Huang contributed equally to this work.

References

[1] W. Guan, S. Chen, S. Wen, Z. Tan, H. Song, and W. Hou, "High-accuracy robot indoor localization scheme based on robot operating system using visible light positioning," IEEE Photonics Journal, vol. 12, no. 2, pp. 1-16, 2020.
[2] J. Civera, A. J. Davison, and J. Montiel, "Inverse depth parametrization for monocular SLAM," IEEE Transactions on Robotics, vol. 24, no. 5, p. 932, 2008.
[3] M. Liu and R. Siegwart, "DP-FACT: towards topological mapping and scene recognition with color for omnidirectional camera," in Proceedings of the International Conference on Robotics and Automation (ICRA), Saint Paul, MN, USA, May 2012.
[4] P. Henry, M. Krainin, E. Herbst, X. Ren, and D. Fox, "RGB-D mapping: using depth cameras for dense 3D modeling of indoor environments," in Proceedings of the 12th International Symposium on Experimental Robotics (ISER 2010), New Delhi, India, December 2010.
[5] M. Liu, F. Pomerleau, F. Colas, and R. Siegwart, "Normal estimation for pointcloud using GPU based sparse tensor voting," in Proceedings of the IEEE International Conference on Robotics and Biomimetics (ROBIO), Shenzhen, China, December 2013.
[6] R. Miyagusuku, A. Yamashita, and H. Asama, "Data information fusion from multiple access points for WiFi-based self-localization," IEEE Robotics and Automation Letters, vol. 4, no. 2, pp. 269-276, 2019.
[7] H. Chen, W. Guan, S. Li, and Y. Wu, "Indoor high precision three-dimensional positioning system based on visible light communication using modified genetic algorithm," Optics Communications, vol. 413, pp. 103-120, 2018.
[8] W. Guan, "A novel three-dimensional indoor positioning algorithm design based on visible light communication," Optics Communications, vol. 392, pp. 282-293, 2017.
[9] S. Juneja and S. Vashisth, "Indoor positioning system using visible light communication," in Proceedings of the 2017 International Conference on Computing and Communication Technologies for Smart Nation (IC3TSN), Gurgaon, India, October 2017.
[10] S. Chen, W. Guan, Z. Tan et al., "High accuracy and error analysis of indoor visible light positioning algorithm based on image sensor," 2019, http://arxiv.org/abs/1911.11773.
[11] L. Bai, Y. Yang, C. Feng et al., "An enhanced camera assisted received signal strength ratio algorithm for indoor visible light positioning," in Proceedings of the 2020 IEEE International Conference on Communications Workshops (ICC Workshops), Dublin, Ireland, June 2020.
[12] A. H. A. Bakar, T. Glass, H. Y. Tee, F. Alam, and M. Legg, "Accurate visible light positioning using multiple photodiode receiver and machine learning," IEEE Transactions on Instrumentation and Measurement, 2020.
[13] L. Li, P. Hu, C. Peng, G. Shen, and F. Zhao, "Epsilon: a visible light based positioning system," in Proceedings of the 11th USENIX Conference on Networked Systems Design and Implementation, Berkeley, CA, USA, April 2014.
[14] N. Rajagopal, P. Lazik, and A. Rowe, "Visual light landmarks for mobile devices," in Proceedings of the 13th International Symposium on Information Processing in Sensor Networks, Berlin, Germany, April 2014.
[15] Y.-S. Kuo, P. Pannuto, K.-J. Hsiao, and P. Dutta, "Luxapose: indoor positioning with mobile phones and visible light," in Proceedings of the 20th Annual International Conference on High Performance Computing, Goa, India, December 2014.
[16] Z. Yang, Z. Wang, J. Zhang, C. Huang, and Q. Zhang, "Wearables can afford: light-weight indoor positioning with visible light," in Proceedings of the 13th Annual International Conference on Mobile Systems, Applications, and Services, Florence, Italy, May 2015.
[17] J. Fang, Z. Yang, S. Long et al., "High-speed indoor navigation system based on visible light and mobile phone," IEEE Photonics Journal, vol. 9, no. 2, pp. 1-11, 2017.
[18] M. S. Rahman, M. M. Haque, and K. D. Kim, "High precision indoor positioning using lighting LED and image sensor," in Proceedings of the International Conference on Computer and Information Technology, Dhaka, Bangladesh, December 2011.
[19] Q. Liang, J. Lin, and M. Liu, "Towards robust visible light positioning under LED shortage by visual-inertial fusion," in Proceedings of the 2019 International Conference on Indoor Positioning and Indoor Navigation (IPIN), Pisa, Italy, October 2019.
[20] Q. Liang and M. Liu, "A tightly coupled VLC-inertial localization system by EKF," IEEE Robotics and Automation Letters, vol. 5, no. 2, pp. 3129-3136, 2020.
[21] W. Guan, Research on High Precision Indoor Visible Light Localization Algorithm Based on Image Sensor, Master's Thesis, South China University of Technology, Guangdong Province, China, 2019.
[22] C. Qiu, B. Hussain, and C. P. Yue, "Bluetooth based wireless control for iBeacon and VLC enabled lighting," in Proceedings of the 2019 IEEE 8th Global Conference on Consumer Electronics (GCCE), Osaka, Japan, October 2019.
[23] B. Hussain, C. Qiu, and C. P. Yue, "Smart lighting control and services using visible light communication and Bluetooth," in Proceedings of the 2019 IEEE 8th Global Conference on Consumer Electronics (GCCE), Osaka, Japan, October 2019.
[24] W. Guan, Z. Liu, S. Wen, H. Xie, and X. Zhang, "Visible light dynamic positioning method using improved CAMshift-Kalman algorithm," IEEE Photonics Journal, vol. 11, no. 6, pp. 1-22, 2019.
[25] C. Xie, "The LED-ID detection and recognition method based on visible light positioning using proximity method," IEEE Photonics Journal, vol. 10, no. 2, Article ID 7902116, 2018.
[26] W. Guan, S. Wen, L. Liu, and H. Zhang, "High-precision indoor positioning algorithm based on visible light communication using complementary metal-oxide-semiconductor image sensor," Optical Engineering, vol. 58, p. 1, 2019.

International Journal of Optics 11

Page 6: High-Accuracy and Real-Time Indoor Positioning System ...

And the z coordinate is

z z1 minus H z2 minus H z3 minus H (6)

According to equations (5) and (6) we can calculate theposition of the camera lens center O Meanwhile the co-ordinates of the robot can be calculated through the staticconversion relationship between the camera and the robotbase e analysis of the algorithm could be found in ourprevious work [26]

3 Implementation of the VLP-BasedRobot System

31 System Setup and Implementation e system setup ofthe VLP-based robot system is shown in Table 1 Mobile robot(Turtlebot3) was used to build the VLP-based system plat-form e images of LEDs were shot by MindVision UB-300industrial camera which is fixed vertically on the robot byprior extrinsic calibration and transmitted by Raspberry Pi 3

Model B As shown in Figure 9 we implemented the system inthe office with three smart LED lamps installed on the ceilingfor positioning e related information of LED lamps isshown in Figure 9 In positioning process the image sensorcan detect at least three LEDs to ensure the realization of thetriple-light positioning Algorithm And for the weak pro-cessor performance of Raspberry Pi 3 Model B the programof image processing and location calculation is run on aremote controller e operating system of the Turtlebot3robot is Ubuntu 1604 MATE which corresponds to theKinetic version of ROS and the system of the remote con-troller is Ubuntu 1604 desktop We demonstrate the per-formance of the VLP robot system on ROS e camerainstalled vertically on the robot captures the image and thenobtains the location information stored by the LED imme-diately through the LED-ROI dynamic tracking algorithmand LED-ID recognition and finally realizes the centimeter-level location through the triple-light positioning algorithmPurple dots are used to represent the positioning result of thegeometric center of the camera obtained by VLP

Interference

LED-ROI obtained by thedynamic tracking algorithm

(a) (b)

(c) (d)

Figure 7 Dynamic tracking performance with modulator tube as background interference of the improved CAMshift-Kalman algorithm

6 International Journal of Optics

32 Positioning Accuracy of the VLP-Based Robot SystemIn order to evaluate the positioning accuracy of the systemtwo series of experiments were carried out e first serieswas used to test the performance of the stationary robot Asshown in Figure 10 uniformly distributed points in theexperimental area were randomly selected to calculate thestandard deviation between the measured position and the

actual position For the positioning accuracy shown inFigure 10 90 positioning errors are less than 3231 cm themaximum positioning error is no more than 5 cm and theaverage positioning error is 214 cm

We test the performance for moving mobile robot in thesecond series of experiments and we control the robot to gostraight and travel a distance after turning to demonstrate

LED1 (x1 y1 z1) LED2 (x2 y2 z2)

(x1 y1 z1)

(x3 y3 z3)

(x2 y2 z2)

LED3 (x3 y3 z3)

Camera

(i2 f2)

(i3 f3)

(i1 f1)

Image plane

H

f

a

b c

bprime

aprime

cprimeOprime

Okprime

LensO

dok

dprimeok

(i3 j3)

(i1 j1)(i2 j2)

Figure 8 e procedure of triple-light positioning

Table 1 Parameters in this paper

Camera specificationsModel MindVision UB-300Pixel (HtimesV) 2048times1536Time of exposure 002msType of shutter acquisition mode Electronic rolling shutterAcquisition mode Successive and soft trigger

Turtlebot3 robot specificationsModule Raspberry pi 3 BCPU Quad core 12 GHz broadcom BCM2837RAM 1GBOperating system Ubuntu mate 1604

Remote controller specificationsModule Acer VN7-593GCPU Quad core Intelreg Coretrade i7-7700HQOperating system Ubuntu 1604 LTS

System platform specificationsSize (LtimesWtimesH) 146times146times 285cm3

LED specificationsCoordinates of LED1 (cm) (13 159 285)Coordinates of LED2 (cm) (159 159 285)Coordinates of LED3 (cm) (159 13 285)e half-power angles of LEDdeg (ψ12) 60Current of each LED 300mARated power 18WOptical output 1500lm 6000KLED modulation frequency 5 kHzDiameter of each LED 150mm

International Journal of Optics 7

LED lamp

Laser radar

Industrial camera

Wheel

Raspbeery Pi

Open CR

Battery

Electricmachinery

TurtleBot3 withindustrial camera

(00)

x

y

LED3 5KHZ (15913)

LED2 3KHZ (159159)

LED1 1KHZ (13159)

Figure 9 VLP-based system implementation platform

80

60

40

20

0

4020

0ndash20

ndash40

200

ndash20ndash40

40

x (cm)y (cm)

z (cm

)

Real positionEstimated position

(a)

ndash40ndash40

ndash30 ndash20 ndash10 0 10 20 30 40

ndash30

ndash20

ndash10

0

10

20

30

40

y (cm

)

x (cm)

Real positionEstimated position

(b)

30

25

20

15

10

5

00 05 1 15 2 25 3 35 4 45 5

Error (cm)

e n

umbe

r of p

ositi

onin

g er

ror

(c)

1

09

08

07

06

05

04

03

02

01

00 1 2 3 4 5 6

Position Error (cm)

e c

df o

f pos

ition

erro

r (

) X 3231Y 0901

(d)

Figure 10 Stationary positioning of VLP-based robot system (a)e 3D positioning results (b)e horizontal view of the 3-D positioningresults (c) Histogram of the positioning error in the system (d) e cumulative distribution function (CDF) curves of positioning errors

8 International Journal of Optics

the real positioning effect of the localization system withdifferent speed As shown in Figure 11 the measured po-sitions of the robot are plotted as purple dots and the robotindustrial camera are highly coincident with the positioningresults in different motion states in the refined section of therobot which reflects the good positioning effect of the VLCpositioning system on ROS mobile robot Due to the re-strictions of ground field we let the robot travel at the speedof 2ndash4 cms However the moving speed of the robot has noeffect on the positioning accuracy but depends on the real-time positioning which is discussed in Section 33 It is worthmentioning that the position results in the coordinates of thecenter of the camera fixed vertically on the robot which canbe transformed to the center of the robot by coordinatetransformation in the VLP robot system

33 Real-Time Positioning Speed of the VLP-Based RobotSystem Location speed is another key factor in the VLP-based robot system which represents the mobile robotrsquosmoving speed when receiving VLC information andcalculating the current position in time erefore for theIS-based VLP system before the mobile robot passesthrough the VLP information transmitter it needs tocapture and extract the VLP information that is beforethe VLC light leaves the field of view (FOV) of the CMOSimage sensor erefore the maximum supportedmoving speed is the speed at which the mobile robotextracts the VLP information over a period of time from

the edge of the leftmost to the right We assume that thetarget LED is tangent to the left edge of the image in frameN and to the right edge of the image in frame N + 1 asshown in Figure 12 and the maximum velocity v of themobile robot is defined as v st where s is the distancethe terminal device moves between two frames and t isthe time from the beginning of the image capture to thecompletion of the positioning calculation Based on theproportional relationship between image coordinates andworld coordinates the relation is expressed as sr Ddwhere r is the pixel length of the image D is the actualdiameter of the LED and D is the diameter of the LED inthe image

As mentioned in Section 31 due to the weak pro-cessing capacity of Raspberry Pi 3 in Turtlebot3 most ofthe image process cannot be directly performed onRaspberry Pi erefore the images captured by theCMOS camera in the robot are transmitted to a remotelaptop for processing leading to positioning delays Weset the timer and counter in the program and the averagetime t is 03893 s As can be seen from Table 1 the actualdiameter D of LED is 150 mm e pixel length r of theimage is 800 (pixels) and the LED diameter d in theimage is 5554 (pixels) According to the definition ofspeed the maximum movement speed of the positioningterminal is 9497 ms vasymp 20 kmh If the robot isequipped with a small computer with better computingpower it can save the time it takes to transmit data andsupport higher moving speeds

Figure 11 e positioning effects for mobile robot moving along different tracks with different speeds of VLP-based robot system

Frame N

(a)

Frame N + 1

(b)

Figure 12 e relative position of the LED in two consecutive frames

International Journal of Optics 9

4 Conclusion

In this paper we design a high-accuracy and real-timeVLP-based robot system on ROS and implement theindoor VLP system based on TurtleBot3 robot platform inthe office as a practical application scenario which isequipped with designed LED lamps having VLC andBluetooth control function e implementation of thesystem shows that the positioning accuracy of the VLP-based system can reach cm-level position accuracy of214 cm and have a good real-time performance that cansupport the moving speed up to 20 kmh which has agreat research prospect in the autonomous positioningand navigation of indoor robots In the future we willapply the VLP-based system to the fusion localization orrobot navigation of ROS making full use of the advan-tages of VLP while making up for the defects of visiblelight positioning

Data Availability

e source code of our finished article is not open to theoutside world and the experimental data have been reflectedin the Experimental Analysis section

Conflicts of Interest

e authors declare that they have no conflicts of interest

Authorsrsquo Contributions

Xianmin Li contributed to the conception of the studyperformed the data analysis and helped perform the analysiswith constructive discussions Zihong Yan contributedsignificantly to the analysis and manuscript preparation andwrote the manuscript Linyi Huang performed the experi-ment and analysis with constructive discussions and wrotethe manuscript Shihuan Chen helped revise the manuscriptManxi Liu helped perform the analysis with constructivediscussions Xianmin Li Zihong Yan and Linyi Huangcontributed equally to this work

References

[1] W Guan S Chen S Wen Z Tan H Song and W HouldquoHigh-accuracy robot indoor localization scheme based onrobot operating system using visible light positioningrdquo IEEEPhotonics Journal vol 12 no 2 pp 1ndash16 2020

[2] J Civera A J Davison and J Montiel ldquoInverse depth pa-rametrization for monocular slamrdquo IEEE Transactions onRobotics vol 24 no 5 p 932 2008

[3] M Liu R Siegwart and D P Fact ldquoTowards topologicalmapping and scene recognition with color for omnidirec-tional camerardquo in Proceedings of the International Conferenceon Robotics and Automation (ICRA) Saint Paul MN USAMay 2012

[4] P Henry M Krainin E Herbst X Ren and D Fox ldquoRGB-Dmapping using depth cameras for dense 3D modeling ofindoor environmentsrdquo in Proceedings of the 12th Interna-tional Symposium on Experimental Robotics ISER 2010 NewDelhi India December 2010

[5] M Liu F Pomerleau F Colas and R Siegwart ldquoNormalestimation for pointcloud using GPU based sparse tensorvotingrdquo in Proceedings of the IEEE International Conferenceon Robotics and Biomimetics (ROBIO) Shenzhen ChinaDecember 2013

[6] R Miyagusuku A Yamashita and H Asama ldquoData infor-mation fusion from multiple access points for WiFi-basedself-localizationrdquo IEEE Robotics and Automation Lettersvol 4 no 2 pp 269ndash276 2019

[7] H Chen W Guan S Li and Y Wu ldquoIndoor high precisionthree-dimensional positioning system based on visible lightcommunication using modified genetic algorithmrdquo OpticsCommunications vol 413 pp 103ndash120 2018

[8] W Guan ldquoA novel three-dimensional indoor positioningalgorithm design based on visible light communicationrdquoOptics Communication vol 392 pp 282ndash293 2017

[9] S Juneja and S Vashisth ldquoIndoor positioning system usingvisible light communicationrdquo in Proceedings of the 2017 In-ternational Conference on Computing and CommunicationTechnologies for Smart Nation (IC3TSN) Gurgaon IndiaOctober 2017

[10] S Chen W Guan Z Tan et al ldquoHigh accuracy and erroranalysis of indoor visible light positioning algorithm based onimage sensorrdquo 2019 httparxivorgabs191111773

[11] L Bai Y Yang C Feng et al ldquoAn enhanced camera assistedreceived signal strength ratio algorithm for indoor visible lightpositioningrdquo in Proceedings of the 2020 IEEE InternationalConference On Communications Workshops (Icc Workshops)Dublin Ireland June 2020

[12] A H A Bakar T Glass H Y Tee F Alam and M LeggldquoAccurate visible light positioning using multiple photodiodereceiver and machine learningrdquo IEEE Transactions on In-strumentation and Measurement 2020

[13] L Li P Hu C Peng G Shen and F Zhao ldquoEpsilon a visiblelight based positioning systemrdquo in Proceedings of the 11thUSENIX Conference on Networked Systems Design andImplementation Berkeley CA USA April 2014

[14] N Rajagopal P Lazik and A Rowe ldquoVisual light landmarksfor mobile devicesrdquo in Proceedings of the 13th InternationalSymposium Information Processing in Sensor Networks BerlinGermany April 2014

[15] Y-S Kuo P Pannuto K-J Hsiao and P Dutta ldquoLuxaposeindoor positioning with mobile phones and visible lightrdquo inProceedings of the 20th Annual International Conference onHigh Performance Computing Goa India December 2014

[16] Z Yang Z Wang J Zhang C Huang and Q ZhangldquoWearables can afford light-weight indoor positioning withvisible lightrdquo in Proceedings of the 13th Annual InternationalConference on Mobile Systems Applications and ServicesFlorence Italy May 2015

[17] J Fang Z Yang S Long et al ldquoHigh-speed indoor navigationsystem based on visible light and mobile phonerdquo IEEEPhotonics Journal vol 9 no 2 pp 1ndash11 2017

[18] M S Rahman M M Haque and K D Kim ldquoHigh precisionindoor positioning using lighting LED and image sensorrdquo inProceedings of the International Conference on Computer andInformation Technology Dhaka Bangladesh December 2011

[19] Q Liang J Lin and M Liu ldquoTowards robust visible lightpositioning under LED shortage by visual-inertial fusionrdquo inProceedings of the 2019 International Conference on IndoorPositioning and Indoor Navigation (IPIN) Pisa Italy October2019

10 International Journal of Optics

[20] Q Liang and M Liu ldquoA tightly coupled VLC-inertial lo-calization system by EKFrdquo IEEE Robotics and AutomationLetters vol 5 no 2 pp 3129ndash3136 2020

[21] W Guan Research on High Precision Indoor Visible LightLocalization Algorithm Based on Image sensor Masterrsquosesis South China University of Technology GuangdongProvince China 2019

[22] C Qiu B Hussain and C P Yue ldquoBluetooth based wirelesscontrol for iBeacon and VLC enabled lightingrdquo in Proceedingsof the 2019 IEEE 8th Global Conference on Consumer Elec-tronics (GCCE) Osaka Japan October 2019

[23] B Hussain C Qiu and C P Yue ldquoSmart lighting control andservices using visible light communication and Bluetoothrdquo inProceedings of the 2019 IEEE 8th Global Conference onConsumer Electronics (GCCE) Osaka Japan October 2019

[24] W Guan Z Liu S Wen H Xie and X Zhang ldquoVisible lightdynamic positioning method using improved camshift-kal-man algorithmrdquo IEEE Photonics Journal vol 11 no 6pp 1ndash22 2019

[25] C Xie ldquoe LED-ID detection and recognition method basedon visible light positioning using proximity methodrdquo IEEEPhotonics vol 10 no 2 p 7902116 2018

[26] G Weipeng S Wen L Liu and H Zhang ldquoHigh-precisionindoor positioning algorithm based on visible light com-munication using complementary metal-ndashoxidendashsemiconductor image sensorrdquo Optical Engineeringvol 58 p 1 2019

International Journal of Optics 11

Page 7: High-Accuracy and Real-Time Indoor Positioning System ...

32 Positioning Accuracy of the VLP-Based Robot SystemIn order to evaluate the positioning accuracy of the systemtwo series of experiments were carried out e first serieswas used to test the performance of the stationary robot Asshown in Figure 10 uniformly distributed points in theexperimental area were randomly selected to calculate thestandard deviation between the measured position and the

actual position For the positioning accuracy shown inFigure 10 90 positioning errors are less than 3231 cm themaximum positioning error is no more than 5 cm and theaverage positioning error is 214 cm

We test the performance for moving mobile robot in thesecond series of experiments and we control the robot to gostraight and travel a distance after turning to demonstrate

LED1 (x1 y1 z1) LED2 (x2 y2 z2)

(x1 y1 z1)

(x3 y3 z3)

(x2 y2 z2)

LED3 (x3 y3 z3)

Camera

(i2 f2)

(i3 f3)

(i1 f1)

Image plane

H

f

a

b c

bprime

aprime

cprimeOprime

Okprime

LensO

dok

dprimeok

(i3 j3)

(i1 j1)(i2 j2)

Figure 8 e procedure of triple-light positioning

Table 1 Parameters in this paper

Camera specificationsModel MindVision UB-300Pixel (HtimesV) 2048times1536Time of exposure 002msType of shutter acquisition mode Electronic rolling shutterAcquisition mode Successive and soft trigger

Turtlebot3 robot specificationsModule Raspberry pi 3 BCPU Quad core 12 GHz broadcom BCM2837RAM 1GBOperating system Ubuntu mate 1604

Remote controller specificationsModule Acer VN7-593GCPU Quad core Intelreg Coretrade i7-7700HQOperating system Ubuntu 1604 LTS

System platform specificationsSize (LtimesWtimesH) 146times146times 285cm3

LED specificationsCoordinates of LED1 (cm) (13 159 285)Coordinates of LED2 (cm) (159 159 285)Coordinates of LED3 (cm) (159 13 285)e half-power angles of LEDdeg (ψ12) 60Current of each LED 300mARated power 18WOptical output 1500lm 6000KLED modulation frequency 5 kHzDiameter of each LED 150mm

International Journal of Optics 7

LED lamp

Laser radar

Industrial camera

Wheel

Raspbeery Pi

Open CR

Battery

Electricmachinery

TurtleBot3 withindustrial camera

(00)

x

y

LED3 5KHZ (15913)

LED2 3KHZ (159159)

LED1 1KHZ (13159)

Figure 9 VLP-based system implementation platform

80

60

40

20

0

4020

0ndash20

ndash40

200

ndash20ndash40

40

x (cm)y (cm)

z (cm

)

Real positionEstimated position

(a)

ndash40ndash40

ndash30 ndash20 ndash10 0 10 20 30 40

ndash30

ndash20

ndash10

0

10

20

30

40

y (cm

)

x (cm)

Real positionEstimated position

(b)

30

25

20

15

10

5

00 05 1 15 2 25 3 35 4 45 5

Error (cm)

e n

umbe

r of p

ositi

onin

g er

ror

(c)

1

09

08

07

06

05

04

03

02

01

00 1 2 3 4 5 6

Position Error (cm)

e c

df o

f pos

ition

erro

r (

) X 3231Y 0901

(d)

Figure 10 Stationary positioning of VLP-based robot system (a)e 3D positioning results (b)e horizontal view of the 3-D positioningresults (c) Histogram of the positioning error in the system (d) e cumulative distribution function (CDF) curves of positioning errors

8 International Journal of Optics

the real positioning effect of the localization system withdifferent speed As shown in Figure 11 the measured po-sitions of the robot are plotted as purple dots and the robotindustrial camera are highly coincident with the positioningresults in different motion states in the refined section of therobot which reflects the good positioning effect of the VLCpositioning system on ROS mobile robot Due to the re-strictions of ground field we let the robot travel at the speedof 2ndash4 cms However the moving speed of the robot has noeffect on the positioning accuracy but depends on the real-time positioning which is discussed in Section 33 It is worthmentioning that the position results in the coordinates of thecenter of the camera fixed vertically on the robot which canbe transformed to the center of the robot by coordinatetransformation in the VLP robot system

33 Real-Time Positioning Speed of the VLP-Based RobotSystem Location speed is another key factor in the VLP-based robot system which represents the mobile robotrsquosmoving speed when receiving VLC information andcalculating the current position in time erefore for theIS-based VLP system before the mobile robot passesthrough the VLP information transmitter it needs tocapture and extract the VLP information that is beforethe VLC light leaves the field of view (FOV) of the CMOSimage sensor erefore the maximum supportedmoving speed is the speed at which the mobile robotextracts the VLP information over a period of time from

the edge of the leftmost to the right We assume that thetarget LED is tangent to the left edge of the image in frameN and to the right edge of the image in frame N + 1 asshown in Figure 12 and the maximum velocity v of themobile robot is defined as v st where s is the distancethe terminal device moves between two frames and t isthe time from the beginning of the image capture to thecompletion of the positioning calculation Based on theproportional relationship between image coordinates andworld coordinates the relation is expressed as sr Ddwhere r is the pixel length of the image D is the actualdiameter of the LED and D is the diameter of the LED inthe image

As mentioned in Section 31 due to the weak pro-cessing capacity of Raspberry Pi 3 in Turtlebot3 most ofthe image process cannot be directly performed onRaspberry Pi erefore the images captured by theCMOS camera in the robot are transmitted to a remotelaptop for processing leading to positioning delays Weset the timer and counter in the program and the averagetime t is 03893 s As can be seen from Table 1 the actualdiameter D of LED is 150 mm e pixel length r of theimage is 800 (pixels) and the LED diameter d in theimage is 5554 (pixels) According to the definition ofspeed the maximum movement speed of the positioningterminal is 9497 ms vasymp 20 kmh If the robot isequipped with a small computer with better computingpower it can save the time it takes to transmit data andsupport higher moving speeds

Figure 11 e positioning effects for mobile robot moving along different tracks with different speeds of VLP-based robot system

Frame N

(a)

Frame N + 1

(b)

Figure 12 e relative position of the LED in two consecutive frames

International Journal of Optics 9

4 Conclusion

In this paper we design a high-accuracy and real-timeVLP-based robot system on ROS and implement theindoor VLP system based on TurtleBot3 robot platform inthe office as a practical application scenario which isequipped with designed LED lamps having VLC andBluetooth control function e implementation of thesystem shows that the positioning accuracy of the VLP-based system can reach cm-level position accuracy of214 cm and have a good real-time performance that cansupport the moving speed up to 20 kmh which has agreat research prospect in the autonomous positioningand navigation of indoor robots In the future we willapply the VLP-based system to the fusion localization orrobot navigation of ROS making full use of the advan-tages of VLP while making up for the defects of visiblelight positioning

Data Availability

e source code of our finished article is not open to theoutside world and the experimental data have been reflectedin the Experimental Analysis section

Conflicts of Interest

e authors declare that they have no conflicts of interest

Authorsrsquo Contributions

Xianmin Li contributed to the conception of the studyperformed the data analysis and helped perform the analysiswith constructive discussions Zihong Yan contributedsignificantly to the analysis and manuscript preparation andwrote the manuscript Linyi Huang performed the experi-ment and analysis with constructive discussions and wrotethe manuscript Shihuan Chen helped revise the manuscriptManxi Liu helped perform the analysis with constructivediscussions Xianmin Li Zihong Yan and Linyi Huangcontributed equally to this work

References

[1] W Guan S Chen S Wen Z Tan H Song and W HouldquoHigh-accuracy robot indoor localization scheme based onrobot operating system using visible light positioningrdquo IEEEPhotonics Journal vol 12 no 2 pp 1ndash16 2020

[2] J Civera A J Davison and J Montiel ldquoInverse depth pa-rametrization for monocular slamrdquo IEEE Transactions onRobotics vol 24 no 5 p 932 2008

[3] M Liu R Siegwart and D P Fact ldquoTowards topologicalmapping and scene recognition with color for omnidirec-tional camerardquo in Proceedings of the International Conferenceon Robotics and Automation (ICRA) Saint Paul MN USAMay 2012

[4] P Henry M Krainin E Herbst X Ren and D Fox ldquoRGB-Dmapping using depth cameras for dense 3D modeling ofindoor environmentsrdquo in Proceedings of the 12th Interna-tional Symposium on Experimental Robotics ISER 2010 NewDelhi India December 2010

[5] M Liu F Pomerleau F Colas and R Siegwart ldquoNormalestimation for pointcloud using GPU based sparse tensorvotingrdquo in Proceedings of the IEEE International Conferenceon Robotics and Biomimetics (ROBIO) Shenzhen ChinaDecember 2013

[6] R Miyagusuku A Yamashita and H Asama ldquoData infor-mation fusion from multiple access points for WiFi-basedself-localizationrdquo IEEE Robotics and Automation Lettersvol 4 no 2 pp 269ndash276 2019

[7] H Chen W Guan S Li and Y Wu ldquoIndoor high precisionthree-dimensional positioning system based on visible lightcommunication using modified genetic algorithmrdquo OpticsCommunications vol 413 pp 103ndash120 2018

[8] W Guan ldquoA novel three-dimensional indoor positioningalgorithm design based on visible light communicationrdquoOptics Communication vol 392 pp 282ndash293 2017

[9] S Juneja and S Vashisth ldquoIndoor positioning system usingvisible light communicationrdquo in Proceedings of the 2017 In-ternational Conference on Computing and CommunicationTechnologies for Smart Nation (IC3TSN) Gurgaon IndiaOctober 2017

[10] S Chen W Guan Z Tan et al ldquoHigh accuracy and erroranalysis of indoor visible light positioning algorithm based onimage sensorrdquo 2019 httparxivorgabs191111773

[11] L Bai Y Yang C Feng et al ldquoAn enhanced camera assistedreceived signal strength ratio algorithm for indoor visible lightpositioningrdquo in Proceedings of the 2020 IEEE InternationalConference On Communications Workshops (Icc Workshops)Dublin Ireland June 2020

[12] A H A Bakar T Glass H Y Tee F Alam and M LeggldquoAccurate visible light positioning using multiple photodiodereceiver and machine learningrdquo IEEE Transactions on In-strumentation and Measurement 2020

[13] L Li P Hu C Peng G Shen and F Zhao ldquoEpsilon a visiblelight based positioning systemrdquo in Proceedings of the 11thUSENIX Conference on Networked Systems Design andImplementation Berkeley CA USA April 2014

[14] N Rajagopal P Lazik and A Rowe ldquoVisual light landmarksfor mobile devicesrdquo in Proceedings of the 13th InternationalSymposium Information Processing in Sensor Networks BerlinGermany April 2014

[15] Y-S Kuo P Pannuto K-J Hsiao and P Dutta ldquoLuxaposeindoor positioning with mobile phones and visible lightrdquo inProceedings of the 20th Annual International Conference onHigh Performance Computing Goa India December 2014

[16] Z Yang Z Wang J Zhang C Huang and Q ZhangldquoWearables can afford light-weight indoor positioning withvisible lightrdquo in Proceedings of the 13th Annual InternationalConference on Mobile Systems Applications and ServicesFlorence Italy May 2015

[17] J Fang Z Yang S Long et al ldquoHigh-speed indoor navigationsystem based on visible light and mobile phonerdquo IEEEPhotonics Journal vol 9 no 2 pp 1ndash11 2017

[18] M S Rahman M M Haque and K D Kim ldquoHigh precisionindoor positioning using lighting LED and image sensorrdquo inProceedings of the International Conference on Computer andInformation Technology Dhaka Bangladesh December 2011

[19] Q Liang J Lin and M Liu ldquoTowards robust visible lightpositioning under LED shortage by visual-inertial fusionrdquo inProceedings of the 2019 International Conference on IndoorPositioning and Indoor Navigation (IPIN) Pisa Italy October2019

10 International Journal of Optics

[20] Q Liang and M Liu ldquoA tightly coupled VLC-inertial lo-calization system by EKFrdquo IEEE Robotics and AutomationLetters vol 5 no 2 pp 3129ndash3136 2020

[21] W Guan Research on High Precision Indoor Visible LightLocalization Algorithm Based on Image sensor Masterrsquosesis South China University of Technology GuangdongProvince China 2019

[22] C Qiu B Hussain and C P Yue ldquoBluetooth based wirelesscontrol for iBeacon and VLC enabled lightingrdquo in Proceedingsof the 2019 IEEE 8th Global Conference on Consumer Elec-tronics (GCCE) Osaka Japan October 2019

[23] B Hussain C Qiu and C P Yue ldquoSmart lighting control andservices using visible light communication and Bluetoothrdquo inProceedings of the 2019 IEEE 8th Global Conference onConsumer Electronics (GCCE) Osaka Japan October 2019

[24] W Guan Z Liu S Wen H Xie and X Zhang ldquoVisible lightdynamic positioning method using improved camshift-kal-man algorithmrdquo IEEE Photonics Journal vol 11 no 6pp 1ndash22 2019

[25] C Xie ldquoe LED-ID detection and recognition method basedon visible light positioning using proximity methodrdquo IEEEPhotonics vol 10 no 2 p 7902116 2018

[26] G Weipeng S Wen L Liu and H Zhang ldquoHigh-precisionindoor positioning algorithm based on visible light com-munication using complementary metal-ndashoxidendashsemiconductor image sensorrdquo Optical Engineeringvol 58 p 1 2019

International Journal of Optics 11

Page 8: High-Accuracy and Real-Time Indoor Positioning System ...

LED lamp

Laser radar

Industrial camera

Wheel

Raspbeery Pi

Open CR

Battery

Electricmachinery

TurtleBot3 withindustrial camera

(00)

x

y

LED3 5KHZ (15913)

LED2 3KHZ (159159)

LED1 1KHZ (13159)

Figure 9 VLP-based system implementation platform

80

60

40

20

0

4020

0ndash20

ndash40

200

ndash20ndash40

40

x (cm)y (cm)

z (cm

)

Real positionEstimated position

(a)

ndash40ndash40

ndash30 ndash20 ndash10 0 10 20 30 40

ndash30

ndash20

ndash10

0

10

20

30

40

y (cm

)

x (cm)

Real positionEstimated position

(b)

30

25

20

15

10

5

00 05 1 15 2 25 3 35 4 45 5

Error (cm)

e n

umbe

r of p

ositi

onin

g er

ror

(c)

1

09

08

07

06

05

04

03

02

01

00 1 2 3 4 5 6

Position Error (cm)

e c

df o

f pos

ition

erro

r (

) X 3231Y 0901

(d)

Figure 10 Stationary positioning of VLP-based robot system (a)e 3D positioning results (b)e horizontal view of the 3-D positioningresults (c) Histogram of the positioning error in the system (d) e cumulative distribution function (CDF) curves of positioning errors

8 International Journal of Optics

the real positioning effect of the localization system withdifferent speed As shown in Figure 11 the measured po-sitions of the robot are plotted as purple dots and the robotindustrial camera are highly coincident with the positioningresults in different motion states in the refined section of therobot which reflects the good positioning effect of the VLCpositioning system on ROS mobile robot Due to the re-strictions of ground field we let the robot travel at the speedof 2ndash4 cms However the moving speed of the robot has noeffect on the positioning accuracy but depends on the real-time positioning which is discussed in Section 33 It is worthmentioning that the position results in the coordinates of thecenter of the camera fixed vertically on the robot which canbe transformed to the center of the robot by coordinatetransformation in the VLP robot system

33 Real-Time Positioning Speed of the VLP-Based RobotSystem Location speed is another key factor in the VLP-based robot system which represents the mobile robotrsquosmoving speed when receiving VLC information andcalculating the current position in time erefore for theIS-based VLP system before the mobile robot passesthrough the VLP information transmitter it needs tocapture and extract the VLP information that is beforethe VLC light leaves the field of view (FOV) of the CMOSimage sensor erefore the maximum supportedmoving speed is the speed at which the mobile robotextracts the VLP information over a period of time from

the edge of the leftmost to the right We assume that thetarget LED is tangent to the left edge of the image in frameN and to the right edge of the image in frame N + 1 asshown in Figure 12 and the maximum velocity v of themobile robot is defined as v st where s is the distancethe terminal device moves between two frames and t isthe time from the beginning of the image capture to thecompletion of the positioning calculation Based on theproportional relationship between image coordinates andworld coordinates the relation is expressed as sr Ddwhere r is the pixel length of the image D is the actualdiameter of the LED and D is the diameter of the LED inthe image

As mentioned in Section 31 due to the weak pro-cessing capacity of Raspberry Pi 3 in Turtlebot3 most ofthe image process cannot be directly performed onRaspberry Pi erefore the images captured by theCMOS camera in the robot are transmitted to a remotelaptop for processing leading to positioning delays Weset the timer and counter in the program and the averagetime t is 03893 s As can be seen from Table 1 the actualdiameter D of LED is 150 mm e pixel length r of theimage is 800 (pixels) and the LED diameter d in theimage is 5554 (pixels) According to the definition ofspeed the maximum movement speed of the positioningterminal is 9497 ms vasymp 20 kmh If the robot isequipped with a small computer with better computingpower it can save the time it takes to transmit data andsupport higher moving speeds

Figure 11 e positioning effects for mobile robot moving along different tracks with different speeds of VLP-based robot system

Frame N

(a)

Frame N + 1

(b)

Figure 12 e relative position of the LED in two consecutive frames

International Journal of Optics 9

4 Conclusion

In this paper we design a high-accuracy and real-timeVLP-based robot system on ROS and implement theindoor VLP system based on TurtleBot3 robot platform inthe office as a practical application scenario which isequipped with designed LED lamps having VLC andBluetooth control function e implementation of thesystem shows that the positioning accuracy of the VLP-based system can reach cm-level position accuracy of214 cm and have a good real-time performance that cansupport the moving speed up to 20 kmh which has agreat research prospect in the autonomous positioningand navigation of indoor robots In the future we willapply the VLP-based system to the fusion localization orrobot navigation of ROS making full use of the advan-tages of VLP while making up for the defects of visiblelight positioning

Data Availability

e source code of our finished article is not open to theoutside world and the experimental data have been reflectedin the Experimental Analysis section

Conflicts of Interest

e authors declare that they have no conflicts of interest

Authorsrsquo Contributions

Xianmin Li contributed to the conception of the studyperformed the data analysis and helped perform the analysiswith constructive discussions Zihong Yan contributedsignificantly to the analysis and manuscript preparation andwrote the manuscript Linyi Huang performed the experi-ment and analysis with constructive discussions and wrotethe manuscript Shihuan Chen helped revise the manuscriptManxi Liu helped perform the analysis with constructivediscussions Xianmin Li Zihong Yan and Linyi Huangcontributed equally to this work

References

[1] W Guan S Chen S Wen Z Tan H Song and W HouldquoHigh-accuracy robot indoor localization scheme based onrobot operating system using visible light positioningrdquo IEEEPhotonics Journal vol 12 no 2 pp 1ndash16 2020

[2] J Civera A J Davison and J Montiel ldquoInverse depth pa-rametrization for monocular slamrdquo IEEE Transactions onRobotics vol 24 no 5 p 932 2008

[3] M Liu R Siegwart and D P Fact ldquoTowards topologicalmapping and scene recognition with color for omnidirec-tional camerardquo in Proceedings of the International Conferenceon Robotics and Automation (ICRA) Saint Paul MN USAMay 2012

[4] P Henry M Krainin E Herbst X Ren and D Fox ldquoRGB-Dmapping using depth cameras for dense 3D modeling ofindoor environmentsrdquo in Proceedings of the 12th Interna-tional Symposium on Experimental Robotics ISER 2010 NewDelhi India December 2010

[5] M Liu F Pomerleau F Colas and R Siegwart ldquoNormalestimation for pointcloud using GPU based sparse tensorvotingrdquo in Proceedings of the IEEE International Conferenceon Robotics and Biomimetics (ROBIO) Shenzhen ChinaDecember 2013


the real positioning effect of the localization system at different speeds. As shown in Figure 11, the measured positions of the robot are plotted as purple dots; the trajectories recorded by the industrial camera are highly coincident with the positioning results in the different motion states shown in the refined section, which reflects the good positioning effect of the VLC positioning system on the ROS mobile robot. Due to the restrictions of the test field, we let the robot travel at speeds of 2–4 cm/s. The moving speed of the robot, however, has no effect on the positioning accuracy; the supportable speed is limited only by the real-time performance of the positioning, which is discussed in Section 3.3. It is worth mentioning that the position results are in the coordinates of the center of the camera fixed vertically on the robot, which can be transformed to the center of the robot by a coordinate transformation in the VLP robot system.
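The camera-to-robot conversion mentioned above is a fixed rigid-body offset applied to each position fix. A minimal Python sketch of such a conversion (the offset value and function name are our own illustration, not the authors' code):

import numpy as np

# Mounting offset of the camera center relative to the robot center,
# expressed in the robot body frame (hypothetical value, in metres).
CAM_OFFSET = np.array([0.05, 0.0])

def camera_to_robot(p_cam, yaw):
    # Rotate the body-frame offset into the world frame and subtract it
    # from the world-frame camera-center position fix.
    R = np.array([[np.cos(yaw), -np.sin(yaw)],
                  [np.sin(yaw),  np.cos(yaw)]])
    return p_cam - R @ CAM_OFFSET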

3.3. Real-Time Positioning Speed of the VLP-Based Robot System. Positioning speed is another key factor in the VLP-based robot system: it determines the mobile robot's maximum moving speed at which VLC information can still be received and the current position calculated in time. For the image-sensor-based VLP system, the mobile robot needs to capture and extract the VLP information before it passes the VLP information transmitter, that is, before the VLC light leaves the field of view (FOV) of the CMOS image sensor. Therefore, the maximum supported moving speed is the speed at which the mobile robot can extract the VLP information in the time the LED takes to cross the image from the leftmost edge to the rightmost.

We assume that the target LED is tangent to the left edge of the image in frame N and to the right edge of the image in frame N + 1, as shown in Figure 12. The maximum velocity v of the mobile robot is then defined as v = s/t, where s is the distance the terminal device moves between the two frames and t is the time from the beginning of the image capture to the completion of the positioning calculation. Based on the proportional relationship between image coordinates and world coordinates, the relation is expressed as s/r = D/d, where r is the pixel length of the image, D is the actual diameter of the LED, and d is the diameter of the LED in the image.
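Expressed as code, the speed bound follows directly from combining the two relations above, v = s/t with s = rD/d (a minimal sketch under those definitions; the function name is ours):

def max_speed(r_px, D_m, d_px, t_s):
    # s = r * D / d: world distance corresponding to the LED
    # traversing the full pixel width of the image.
    s = r_px * D_m / d_px
    # v = s / t: that distance over the total time to capture
    # the image and complete the positioning calculation.
    return s / t_s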

As mentioned in Section 3.1, due to the weak processing capacity of the Raspberry Pi 3 in the TurtleBot3, most of the image processing cannot be performed directly on the Raspberry Pi. Therefore, the images captured by the CMOS camera on the robot are transmitted to a remote laptop for processing, leading to positioning delays. We set a timer and counter in the program, and the average time t is 0.3893 s. As can be seen from Table 1, the actual diameter D of the LED is 150 mm, the pixel length r of the image is 800 pixels, and the LED diameter d in the image is 55.54 pixels. According to the definition of speed, the maximum movement speed of the positioning terminal is about 5.55 m/s, i.e., v ≈ 20 km/h. If the robot were equipped with a small computer with better computing power, the time spent transmitting data could be saved, supporting higher moving speeds.
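Plugging the measured values from Table 1 into the sketch above reproduces the reported bound:

# r = 800 px, D = 0.150 m, d = 55.54 px, t = 0.3893 s
v = max_speed(800, 0.150, 55.54, 0.3893)
print(round(v, 2), round(v * 3.6, 1))  # 5.55 m/s, 20.0 km/h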

Figure 11: The positioning effects for the mobile robot moving along different tracks with different speeds in the VLP-based robot system.

Figure 12: The relative position of the LED in two consecutive frames: (a) frame N; (b) frame N + 1.


4. Conclusion

In this paper, we design a high-accuracy and real-time VLP-based robot system on ROS and implement the indoor VLP system on the TurtleBot3 robot platform in an office, as a practical application scenario, equipped with the designed LED lamps having VLC and Bluetooth control functions. The implementation of the system shows that the VLP-based system can reach cm-level positioning accuracy of 2.14 cm with good real-time performance that can support moving speeds up to 20 km/h, which gives it great research prospects in the autonomous positioning and navigation of indoor robots. In the future, we will apply the VLP-based system to fusion localization and robot navigation on ROS, making full use of the advantages of VLP while compensating for the defects of visible light positioning.

Data Availability

The source code of this work is not open to the outside world, and the experimental data are reported in the experimental analysis section.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Authors' Contributions

Xianmin Li contributed to the conception of the study, performed the data analysis, and helped perform the analysis with constructive discussions. Zihong Yan contributed significantly to the analysis and manuscript preparation and wrote the manuscript. Linyi Huang performed the experiment and analysis with constructive discussions and wrote the manuscript. Shihuan Chen helped revise the manuscript. Manxi Liu helped perform the analysis with constructive discussions. Xianmin Li, Zihong Yan, and Linyi Huang contributed equally to this work.

References

[1] W. Guan, S. Chen, S. Wen, Z. Tan, H. Song, and W. Hou, "High-accuracy robot indoor localization scheme based on robot operating system using visible light positioning," IEEE Photonics Journal, vol. 12, no. 2, pp. 1–16, 2020.

[2] J. Civera, A. J. Davison, and J. Montiel, "Inverse depth parametrization for monocular SLAM," IEEE Transactions on Robotics, vol. 24, no. 5, p. 932, 2008.

[3] M. Liu and R. Siegwart, "DP-FACT: towards topological mapping and scene recognition with color for omnidirectional camera," in Proceedings of the International Conference on Robotics and Automation (ICRA), Saint Paul, MN, USA, May 2012.

[4] P. Henry, M. Krainin, E. Herbst, X. Ren, and D. Fox, "RGB-D mapping: using depth cameras for dense 3D modeling of indoor environments," in Proceedings of the 12th International Symposium on Experimental Robotics (ISER 2010), New Delhi, India, December 2010.

[5] M. Liu, F. Pomerleau, F. Colas, and R. Siegwart, "Normal estimation for pointcloud using GPU based sparse tensor voting," in Proceedings of the IEEE International Conference on Robotics and Biomimetics (ROBIO), Shenzhen, China, December 2013.

[6] R. Miyagusuku, A. Yamashita, and H. Asama, "Data information fusion from multiple access points for WiFi-based self-localization," IEEE Robotics and Automation Letters, vol. 4, no. 2, pp. 269–276, 2019.

[7] H. Chen, W. Guan, S. Li, and Y. Wu, "Indoor high precision three-dimensional positioning system based on visible light communication using modified genetic algorithm," Optics Communications, vol. 413, pp. 103–120, 2018.

[8] W. Guan, "A novel three-dimensional indoor positioning algorithm design based on visible light communication," Optics Communications, vol. 392, pp. 282–293, 2017.

[9] S. Juneja and S. Vashisth, "Indoor positioning system using visible light communication," in Proceedings of the 2017 International Conference on Computing and Communication Technologies for Smart Nation (IC3TSN), Gurgaon, India, October 2017.

[10] S. Chen, W. Guan, Z. Tan et al., "High accuracy and error analysis of indoor visible light positioning algorithm based on image sensor," 2019, http://arxiv.org/abs/1911.11773.

[11] L. Bai, Y. Yang, C. Feng et al., "An enhanced camera assisted received signal strength ratio algorithm for indoor visible light positioning," in Proceedings of the 2020 IEEE International Conference on Communications Workshops (ICC Workshops), Dublin, Ireland, June 2020.

[12] A. H. A. Bakar, T. Glass, H. Y. Tee, F. Alam, and M. Legg, "Accurate visible light positioning using multiple photodiode receiver and machine learning," IEEE Transactions on Instrumentation and Measurement, 2020.

[13] L. Li, P. Hu, C. Peng, G. Shen, and F. Zhao, "Epsilon: a visible light based positioning system," in Proceedings of the 11th USENIX Conference on Networked Systems Design and Implementation, Berkeley, CA, USA, April 2014.

[14] N. Rajagopal, P. Lazik, and A. Rowe, "Visual light landmarks for mobile devices," in Proceedings of the 13th International Symposium on Information Processing in Sensor Networks, Berlin, Germany, April 2014.

[15] Y.-S. Kuo, P. Pannuto, K.-J. Hsiao, and P. Dutta, "Luxapose: indoor positioning with mobile phones and visible light," in Proceedings of the 20th Annual International Conference on High Performance Computing, Goa, India, December 2014.

[16] Z. Yang, Z. Wang, J. Zhang, C. Huang, and Q. Zhang, "Wearables can afford: light-weight indoor positioning with visible light," in Proceedings of the 13th Annual International Conference on Mobile Systems, Applications, and Services, Florence, Italy, May 2015.

[17] J. Fang, Z. Yang, S. Long et al., "High-speed indoor navigation system based on visible light and mobile phone," IEEE Photonics Journal, vol. 9, no. 2, pp. 1–11, 2017.

[18] M. S. Rahman, M. M. Haque, and K. D. Kim, "High precision indoor positioning using lighting LED and image sensor," in Proceedings of the International Conference on Computer and Information Technology, Dhaka, Bangladesh, December 2011.

[19] Q. Liang, J. Lin, and M. Liu, "Towards robust visible light positioning under LED shortage by visual-inertial fusion," in Proceedings of the 2019 International Conference on Indoor Positioning and Indoor Navigation (IPIN), Pisa, Italy, October 2019.

[20] Q. Liang and M. Liu, "A tightly coupled VLC-inertial localization system by EKF," IEEE Robotics and Automation Letters, vol. 5, no. 2, pp. 3129–3136, 2020.

[21] W. Guan, Research on High Precision Indoor Visible Light Localization Algorithm Based on Image Sensor, Master's thesis, South China University of Technology, Guangdong Province, China, 2019.

[22] C. Qiu, B. Hussain, and C. P. Yue, "Bluetooth based wireless control for iBeacon and VLC enabled lighting," in Proceedings of the 2019 IEEE 8th Global Conference on Consumer Electronics (GCCE), Osaka, Japan, October 2019.

[23] B. Hussain, C. Qiu, and C. P. Yue, "Smart lighting control and services using visible light communication and Bluetooth," in Proceedings of the 2019 IEEE 8th Global Conference on Consumer Electronics (GCCE), Osaka, Japan, October 2019.

[24] W. Guan, Z. Liu, S. Wen, H. Xie, and X. Zhang, "Visible light dynamic positioning method using improved CAMShift-Kalman algorithm," IEEE Photonics Journal, vol. 11, no. 6, pp. 1–22, 2019.

[25] C. Xie, "The LED-ID detection and recognition method based on visible light positioning using proximity method," IEEE Photonics Journal, vol. 10, no. 2, p. 7902116, 2018.

[26] W. Guan, S. Wen, L. Liu, and H. Zhang, "High-precision indoor positioning algorithm based on visible light communication using complementary metal-oxide-semiconductor image sensor," Optical Engineering, vol. 58, p. 1, 2019.
