
Response of Vulnerable Road Users to Visual Information from Autonomous Vehicles in Shared Spaces

Walter Morales Alvarez1, Student Member, IEEE, Miguel Ángel de Miguel2, Fernando García2, Member, IEEE, and Cristina Olaverri-Monreal1, Senior Member, IEEE

© 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. DOI: 10.1109/ITSC.2019.8917501

Abstract— Completely unmanned autonomous vehicles have been anticipated for a while. Initially, these are expected to drive only under certain conditions on some roads, and advanced functionality is required to cope with the ever-increasing challenges of safety. To enhance the public's perception of road safety and trust in new vehicular technologies, we investigate in this paper the effect of several interaction paradigms with vulnerable road users by developing and applying algorithms for the automatic analysis of pedestrian body language. We assess behavioral patterns and determine the impact of the coexistence of AVs and other road users on general road safety in a shared space for VRUs and vehicles. Results showed that the implementation of visual communication cues for interacting with VRUs is not necessarily required for a shared space in which informal traffic rules apply.

I. INTRODUCTION

The arrival of driverless vehicles has been anticipated for some time. Several aspects regarding their convenience and safety have been addressed, highlighting for example that the replacement of the human driver by automation will lead to more efficient driving patterns that result in the environmental benefits of decreased traffic congestion, as well as public safety improvements due to fewer traffic-related injuries [1]. Many vehicles are already equipped with technology that enables self-driving automation, such as lane-keeping assistance and automated braking. In the near future, highly autonomous, complex dynamical systems will be mature enough to implement intelligent autonomous vehicles (AVs).

Competition has arisen among automotive manufacturers to develop autonomous vehicles at ever higher levels of automation and to commercialize them. BMW showed an autonomous concept car at CES 2016 and announced its initiative to include the automation of its vehicles as part of its iNEXT project [2]. General Motors launched in 2018 the Cadillac CT6, its first Level 2 autonomous vehicle, which possesses a hands-free driving system called Super Cruise [3]. At the same time, Renault developed its own autonomous concept vehicle, the EZ-GO, presented at the Geneva Motor Show in 2018 [4]. The company also announced that in 2020 it would present a fleet of vehicles equipped

1 Johannes Kepler University Linz, Austria; Chair for Sustainable Transport Logistics 4.0. {walter.morales alvarez, cristina.olaverri-monreal}@jku.at

2 Universidad Carlos III de Madrid, Spain. {mimiguel, fegarcia}@ing.uc3m.es

with a significant amount of autonomous functionality. Finally, Nissan introduced its Serena model, boasting a one-lane autonomous driving system called ProPILOT, which is being featured in Japan [5]. Despite the multiple advances that have taken place all over the world that bring society closer to self-driving vehicles, there is still no car on the road that is completely autonomous. Most are concepts and prototypes of fully automated vehicles that drive in controlled environments (e.g., universities) under certain conditions and on predetermined roads.

Road safety is not only determined by the technology of the autonomous vehicles themselves; rather, a significant aspect of safety lies in the interaction between the automated vehicles and the other vulnerable road users (VRUs).

Therefore, it is crucial to assess patterns regarding complexity and risk by judging and anticipating the actions of the different actors in the system to determine the rules for their co-existence. In this context, the perceived trustworthiness of new vehicular technologies will be jeopardized if other users are not able to determine the authenticity of the information provided by the autonomous vehicle [6].

In this paper we aim at increasing road safety through approaches that augment awareness of the surrounding environment for road users and the automation. We focus on VRUs, as they cannot make visual contact with a driver in a driverless vehicle and must therefore turn to novel or unfamiliar ways of understanding the decisions made by the vehicle, if they are able to do so at all.

To address this, we investigate interaction strategies by applying in field tests the algorithms for the automatic analysis of pedestrian body language presented in [7] to determine the impact of the coexistence of AVs and other road users on general road safety. To this end, we focus on crossing behavior that occurs relatively close to and directly in front of the AV, as it is relevant for safety and indicates that pedestrians trust the technology.

We therefore define the following research question: are pedestrians more likely (unnecessarily) to pause or stop and yield the roadway to a driverless vehicle when the vehicle does not signal or indicate to pedestrians that they have been seen? From it we form the following null hypothesis:

H0: There is no relationship between measured pedestrian crossing behavior and driverless vehicle communication signals.

We performed the field tests in shared spaces in which a traditional safety infrastructure to guide VRUs does not exist, so that everyone is forced to become more alert and



ultimately more cooperative [8]. This scenario is applicable, for example, in the "last mile" with automatic delivery robots.

The remainder of the paper is organized as follows: the next section describes related work in the field; Section III details the description of the field test; Section IV describes the modules that acquire the pedestrian data, which allow a quantitative study of the interaction with the autonomous vehicle; Section V presents the method to assess the data collected; Section VI presents the obtained results; and, finally, Section VII discusses and concludes the work.

II. RELATED WORK

As previously mentioned, communication protocols that make the interaction of driverless vehicles and VRUs possible are necessary to foster trust in the automation [9]. A substantial body of literature has been dedicated to the study and interpretation of pedestrian behavior during interaction with vehicles. For example, the authors in [10] identified the parameters that affect pedestrians at crosswalks in order to predict their intentions, addressing this issue from two distinct perspectives, the pedestrian's and the driver's.

Within this study the authors concluded that interactions were based on a given vehicle's distance rather than Time To Collision (TTC), corroborating the results presented in [11]. The results from Hamaoka et al. [12] showed that there are several physical locations on the street where pedestrians seek to confirm the proximity of a vehicle to ensure their safety. An indicator of this was the frequency of head turning, which was higher at the edges and middle of crosswalks. Although the previous studies quantitatively established the main parameters governing pedestrian decisions when crossing, they were based on conventional traffic situations with manned vehicles.

In recent studies focusing on the interaction between autonomous vehicles and pedestrians, it was shown that people felt more comfortable crossing the street when a form of response from the side of the vehicle was presented (e.g., eye contact), similar to the interaction that occurs with manually driven vehicles [13], [14]. In the same line of research, a survey by the League of American Bicyclists concluded that the inability of pedestrians and cyclists to communicate and make eye contact with a driverless vehicle increased perceived risk [15].

The authors in [16] measured the importance of using communication interfaces between the pedestrian and the autonomous vehicle. For their purposes they used a remote-controlled golf cart with an LED word display that explicitly indicated when pedestrians should cross the road in front of them, and they developed a simulator to test human behavior in this particular setting. Basing their results on a qualitative data collection method, they showed that trust in the technology depends on prior knowledge about AVs and the distance between pedestrian and vehicle.

Further, different early-stage display concepts for interfaces were evaluated by means of crowdsourcing in [17], and more advanced interfaces and communication protocols have been tested in [18] by using, for example, images that

follow pedestrians [19] or implicit forms of communication that included vehicle motion patterns such as braking [20].

Important groundwork for our line of research was laid in [21], which identified factors that potentially influenced the perception of a road situation as safe in an environment in which vehicles operated with full driving automation (Level 5) in a public space. The analysis of recorded videos and subjective qualitative data established that there were several levels of trust, uncertainty and a certain degree of fear among participants. However, the existence of a communication system to support the interaction with the driverless vehicles was evaluated as positive.

Although previous studies showed that adding an external monitor or screen to an autonomous vehicle helped VRUs to gather relevant information to properly identify the road situation and make the right choices [21], other studies, such as [22], showed that the patterns in the previous studies are not decisive in shaping pedestrian behavior. Moreover, in [23] and [24] the authors concluded that people's reactions and behavior are determined to a greater extent by the distance and speed of the vehicle than by the interface presented by the vehicles [25].

All the previous studies focused on defining the main factors that influenced pedestrian crossing behavior. However, most of them relied on qualitative data, and the studies that were based on quantitative data used a Wizard of Oz (OZ) paradigm to mimic the behavior of the intelligent vehicle. We contribute to the state of the art by presenting in this work quantitative data using an unmanned vehicle in a shared space, as explained in the next section.

III. FIELD TEST DESCRIPTION

In order to obtain behavioral patterns of different individuals in the environment, we applied the algorithms described in [7] and identified poses adopted by pedestrians in the urban environment when they were exposed to the presence of a driving AV. The environment consisted of a shared space in which segregation of VRUs and vehicles was minimized. In such a scenario, traffic relies more on the informal rules of foot traffic.

The selected scenario was the campus of the University Carlos III in Madrid. The campus contains several green spaces that are connected to the village of Leganés, and consequently the pedestrians were residents of the area as well as students from the university. In this scenario the Intelligent Campus Automobile (iCab) autonomous vehicle (see [26]) passed multiple times along a predefined route of 30 meters through a perpendicular flow of pedestrians, creating many opportunities for them to cross in front of the AV. However, the flow of pedestrians could move in multiple directions, such that crossing in front of the AV was not absolutely essential.

The vehicle was equipped with an external Human Machine Interface (HMI) that conveyed several messages to the pedestrians to indicate whether they had been detected (see Figure 1). During the experiment, vehicle sensors acquired


Fig. 1: (a) Autonomous vehicle displaying closed eyes to indicate that VRUs have not been detected. (b) Open eyes on the display indicating detection of VRUs. The image displayed by the AV is depicted in the figure's lower right corner.

the corresponding data necessary to analyze pedestrian behavior.

The experiments were conducted over two days, during which pedestrians were continuously exposed to the iCab and the data were recorded for further processing and analysis. From these tests, 36 videos were obtained, featuring 135 pedestrians interacting with the vehicle. In order to mimic real road conditions as much as possible, pedestrians were not aware of the data collection. To ensure the safety of the pedestrians in case of a failure, a remote control in a fixed location off-site made it possible to stop the vehicle in an emergency situation.

The following parameters were set to determine road safety as well as response to displayed messages:

• distance between pedestrians and AV
• vehicle's speed along its path
• head and body pose

Using all of the above data we could obtain pedestrian behavioral patterns indicated by their pose and distance to the AV, as well as road safety-related information such as the TTC (calculated using vehicle speed and pedestrian coordinates). This information was analyzed according to the corresponding image that was displayed on the vehicle interface.
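The TTC computation from vehicle speed and pedestrian position described above can be sketched as follows. This is a minimal illustration under the assumption that the pedestrian position is expressed in the vehicle's own coordinate frame; the function name and interface are ours, not the authors'.

```python
import math

def time_to_collision(ped_xy, vehicle_speed_mps):
    """Estimate TTC as the pedestrian's straight-line distance from the
    vehicle origin divided by the vehicle's current speed.

    ped_xy: (x, y) pedestrian position in the vehicle frame, in meters.
    vehicle_speed_mps: vehicle speed in m/s.
    Returns TTC in seconds, or math.inf when the vehicle is stopped.
    """
    distance = math.hypot(ped_xy[0], ped_xy[1])
    if vehicle_speed_mps <= 0.0:
        return math.inf
    return distance / vehicle_speed_mps
```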

The design of the HMI relied on the description in [21]. It was developed in C++ and integrated with the Robot Operating System (ROS) into the vehicle's operating system. The vehicle detected pedestrians in the proximity, taking into account the degree of rotation of the vehicle, and then activated different displays depending on whether it had detected the pedestrian or not. The algorithms were trained to analyze eye contact, facial expression, and head pose to determine the crossing behavior depending on the message conveyed. To this end the following experiments were performed:

A. Baseline Condition

A performance baseline in which no message was displayed was established to quantify changes in pedestrian behavior.

B. Red-Green Sign

Inspired by traditional traffic light color-coding, a red screen indicated to pedestrians that it was not safe to cross and a green one signalled that crossing was safe.

C. Open-Closed Eyes

An additional set of images mimicked driver behavior as a strategy to ensure that the VRUs understood the decisions made by the vehicle. The display showed a pair of open eyes indicating that the pedestrian had been detected and could cross, or a pair of closed eyes indicating that the vehicle had not noticed the pedestrian.
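The detection-dependent display behavior across the three conditions can be sketched as a small decision function. This is a hypothetical reconstruction for illustration only: the actual HMI was a C++/ROS component, and the condition and image names used here are our assumptions.

```python
# Hypothetical sketch of the HMI display-selection logic.
# Condition names ("baseline", "red_green", "eyes") and image labels
# are illustrative, not taken from the authors' implementation.
def select_display(condition, pedestrian_detected):
    """Return the image the external HMI should show for a given
    experiment condition and pedestrian-detection state."""
    if condition == "baseline":
        return None                       # no message displayed
    if condition == "red_green":
        return "green" if pedestrian_detected else "red"
    if condition == "eyes":
        return "open_eyes" if pedestrian_detected else "closed_eyes"
    raise ValueError(f"unknown condition: {condition}")
```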

IV. ALGORITHMS IMPLEMENTATION

A. Pose Identification

Relying on the approach presented in [7], the specific pose of a pedestrian was identified using the OpenPose open source library developed by CMU Panoptic labs [27], [28], [29], which designed and trained a feedback convolutional neural network that determines the key points of individual poses in an RGB image and renders the poses as seen in Figure 2. The neural network is in charge of calculating the heatmaps where the keypoints of the pose are most likely to be found, and it connects them using the Part Affinity Fields (PAFs) feature, which preserves the location and orientation of people's joints. Using the cameras present on the vehicle and implementing the OpenPose library, it is possible to obtain up to 25 pedestrian pose keypoints and 26 facial position keypoints to determine behavioral patterns.
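As a rough illustration of how such keypoints can be consumed, the following heuristic decides whether a pedestrian is facing the vehicle's camera from OpenPose-style BODY_25 keypoints, where each keypoint is an (x, y, confidence) triple. The keypoint indices follow the BODY_25 layout; the visibility criterion itself is an assumption of ours, not the classifier trained by the authors.

```python
# Indices per the OpenPose BODY_25 layout: 0 = nose, 15 = right eye,
# 16 = left eye. Each keypoint is an (x, y, confidence) triple.
NOSE, R_EYE, L_EYE = 0, 15, 16

def faces_camera(keypoints, conf_thresh=0.3):
    """Heuristic: treat a pedestrian as looking toward the vehicle when
    the nose and both eyes are detected with sufficient confidence.

    keypoints: list of 25 (x, y, confidence) triples for one person.
    """
    def visible(idx):
        return keypoints[idx][2] >= conf_thresh
    return visible(NOSE) and visible(R_EYE) and visible(L_EYE)
```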

B. Distance Estimation

A further crucial parameter for estimating road safety is the pedestrian's distance from the approaching vehicle at the time of crossing. Thanks to a 2D laser that is integrated in the autonomous vehicle, it is possible to acquire the distances between the pedestrians and the AV. As in the previous modules, the laser data are acquired through ROS, with an acquisition node publishing on a certain topic


Fig. 2: Pose (color points) and face (white points) calculated by the OpenPose library.

Fig. 3: Left: Image acquired by the stereoscopic camera on the autonomous vehicle. The reference axis perpendicular to the picture plane corresponds to the red axis in the image on the right. Right: Representation of VRUs as a series of distance points in the RViz ROS visualization widget. The reference axes are located in the center of the image, whose plane is parallel to the vehicle's plane of movement.

(/icab/scan) the distances in meters of the nearby objects. The data are obtained as a series of points that can be observed using the RViz visualization package, as shown in Figure 3. Using the RViz tools and analyzing the points corresponding to pedestrians, it is possible to determine their distance at the time of interaction with the vehicle.
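A minimal sketch of reducing such a scan to a single pedestrian distance is shown below. It assumes a ROS LaserScan-like layout, in which `ranges` holds readings at angles growing from `angle_min` in steps of `angle_increment`; the frontal sector width and range limits are illustrative values, not the vehicle's actual configuration.

```python
import math

def pedestrian_distance(ranges, angle_min, angle_increment,
                        sector=math.radians(30), max_range=30.0):
    """Return the distance (m) of the closest valid return inside a
    frontal sector of a 2D laser scan, or None if nothing is in range.

    ranges: list of range readings, as in a ROS LaserScan message
    (e.g. from the /icab/scan topic mentioned above).
    """
    best = None
    for i, r in enumerate(ranges):
        angle = angle_min + i * angle_increment
        # keep only finite returns inside the frontal sector
        if abs(angle) <= sector / 2 and 0.0 < r < max_range:
            best = r if best is None else min(best, r)
    return best
```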

C. Velocity

The autonomous vehicle was equipped with optical wheel encoders with which the speed of the vehicle could be obtained, taking into account the physical dimensions of the automobile's wheels. The ROS package installed in the Autonomous Vehicle (AV) made it possible to publish the speed of the vehicle at all times through the pertinent topic (e.g., /icab1/velocity absolute).
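The speed computation from encoder ticks and wheel dimensions can be sketched as follows; the parameter values in the usage are illustrative, and the actual publishing node was part of the iCab ROS software rather than this function.

```python
import math

def wheel_speed(ticks, ticks_per_rev, wheel_diameter_m, dt_s):
    """Vehicle speed (m/s) from incremental encoder ticks counted over
    an interval dt_s, given the wheel's physical dimensions.

    distance = revolutions * wheel circumference (pi * diameter).
    """
    revolutions = ticks / ticks_per_rev
    distance = revolutions * math.pi * wheel_diameter_m
    return distance / dt_s
```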

V. DATA ACQUISITION AND ANALYSIS FOR PEDESTRIAN BEHAVIOR

To test the defined hypothesis we acquired the required data through the algorithms described in Section IV as follows:

Head and body poses to determine behavioral patterns depending on the message displayed were identified by the autonomous vehicle. As described in Section III, the experiment conditions were as follows: baseline, red-green sign, or open-closed eyes. Two categories were created based on these data:

1) VRUs that saw the message on the vehicle and changed their behavior (e.g., stopped for a moment).

2) VRUs that saw the message and continued without any change (e.g., kept walking).

To test the relationships between the categorical variables we performed a Pearson χ² test. Further, we determined the distance between the iCab and the pedestrians. Based on this distance we also calculated the TTC using the velocity of the vehicle. Statistical significance of the relationship was tested through an unpaired t-test.

The data corresponding to the persons who did not see the vehicle were additionally analyzed. As is known, eye contact plays a critical role at unmarked intersections, as integrating glances facilitates cooperative action, while avoiding eye contact is a way of dominating the other in an interaction [30], cited in [31]. Finally, as in [32], a power analysis was performed to measure the effectiveness of the t-test in rejecting the null hypothesis by calculating the probability of not committing a type II error (1−β) or, in other words, the probability of correctly rejecting a false null hypothesis.
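The two statistics used here can be sketched in a self-contained way: a Pearson χ² statistic for a 2×2 contingency table (walked/stopped under two display conditions), and an approximate power for the unpaired t-test. The normal approximation for power and all names are our assumptions; the authors do not specify their analysis tooling.

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-squared statistic (1 df, no continuity correction)
    for a 2x2 contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

def ttest_power(d, n1, n2, alpha=0.05):
    """Approximate power (1 - beta) of a two-sided unpaired t-test for
    effect size d (Cohen's d), using the normal approximation."""
    se = math.sqrt(1 / n1 + 1 / n2)
    z_alpha = 1.959963984540054          # Phi^{-1}(1 - alpha/2)
    z = d / se
    # power ~= Phi(z - z_alpha), with Phi the standard normal CDF
    return 0.5 * (1 - math.erf((z_alpha - z) / math.sqrt(2)))
```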

VI. RESULTS

From the extracted information we could derive that 92 pedestrians (68.14%) looked at the screen that was displaying the images and 43 (31.86%) did not even look at the vehicle.

Among the pedestrians who looked at the screen, a greater percentage crossed in front of the AV, independently of the message displayed. From the results presented in Table I, the distributions of the categorical variables differed from one another, but the differences in the proportion of pedestrians who walked or stopped when the screen showed red or closed eyes were not statistically significant. Therefore, we fail to reject the null hypothesis.

As for the data related to the distance and TTC, results from the power analysis for independent samples from the t-test ranged from 73% to 97%, indicating that there is a low probability of committing a type II error and erroneously accepting the null hypothesis when testing the parameters. Table II shows the obtained values. The analysis indicated that the distance to the vehicle at the moment of crossing was lower under the baseline condition. It also shows that the TTC was lower when the red/green color-coded message and the message showing open/closed eyes were displayed. However, these values did not differ significantly between participants.

Regarding the persons who did not see the vehicle, results determined by TTC and distance to the vehicle showed that the effect on road safety of the lack of eye contact at unmarked intersections was not significant (Table III).


TABLE I: Pedestrian behavior depending on the system display condition

          Baseline   Green color   Open eyes   Red color   Closed eyes
Walking      17           9            11          25           21
Stand         3           2             1           1            2

χ² test (α = 0.05)
Baseline vs. Green color:     χ²(1, N=31) = 1.99, p = 0.158
Baseline vs. Open eyes:       χ²(1, N=32) = 0.49, p = 0.484
Baseline vs. Red color:       χ²(1, N=46) = 1.77, p = 0.183
Baseline vs. Closed eyes:     χ²(1, N=43) = 0.41, p = 0.522
Green color vs. Open eyes:    χ²(1, N=23) = 0.49, p = 0.484
Green color vs. Red color:    χ²(1, N=27) = 2.13, p = 0.144
Green color vs. Closed eyes:  χ²(1, N=34) = 0.65, p = 0.420
Open eyes vs. Red color:      χ²(1, N=38) = 0.33, p = 0.57
Open eyes vs. Closed eyes:    χ²(1, N=35) = 0.01, p = 0.974
Red color vs. Closed eyes:    χ²(1, N=49) = 0.50, p = 0.480

TABLE II: Pedestrian distance to the vehicle as well as TTC while crossing depending on the kind of display shown

Metric         Baseline        Red/green color   Open/closed eyes
               Mean    SD      Mean    SD        Mean    SD
Distance (m)   6.14    3.56    7.38    3.48      6.88    2.76
TTC (s)        7.31    4.64    4.9     6.23      5.10    7.91

T-test (α = 0.05)
Metric         Baseline vs.      Baseline vs.       Red/green color vs.
               Red/green color   Open/closed eyes   Open/closed eyes
               t(92)    p        t(92)    p         t(92)    p
Distance (m)   1.27     0.20     0.85     0.39      0.67     0.55
TTC (s)        1.51     0.13     1.07     0.28      0.90     0.12

Finally, Figure 4 depicts the number of pedestrians that had seen the vehicle and crossed in front of the AV, considering their distance to the vehicle, as well as the TTC in relation to the kind of display shown. From this graphic we can see that 69 pedestrians (71.7%) crossed at a distance between 5 and 9 meters, with TTC values ranging from 2 to 8 seconds.

VII. CONCLUSION, DISCUSSION AND FUTURE WORK

First, it is necessary to note that looking at the vehicle guaranteed recognition of the display and the images on it, which was confirmed by pedestrian participant comments such as "the car is looking at you". This is important because the study is based on the images that the vehicle showed to pedestrians.

The results reported in this paper did not show statistically significant differences between the proportion of pedestrians who continued walking and crossed in front of the AV and the proportion of those who stopped, depending on the display. Therefore, we fail to reject the null hypothesis.

Moreover, it was observed in most cases that pedestrians crossed even when the message was red or displayed closed

TABLE III: Effect of eye contact on interaction with the AV

T-test (α = 0.05)
Metric         Without eye contact   Eye contact
               Mean     SD           Mean    SD       t(133)   p
Distance (m)   6.93     3.28         7.81    3.56     1.41     0.1599
TTC (s)        5.87     6.71         8.93    12.22    1.37     0.1726

Fig. 4: Pedestrian distance to the vehicle (a) and TTC (b) while crossing depending on display conditions.

eyes. Apparently, the detection of the vehicle on the part of the pedestrians was sufficient for them to make the decision to cross or not. Therefore, the premise behind the research question formulated in the beginning, that pedestrians are more likely to pause and refrain from crossing in front of an AV, could not be confirmed. Furthermore, the kind of display affected neither the distance at which pedestrians crossed in front of the AV nor the TTC. This was probably because the vehicle was slow, never exceeding 5 m/s, as people are less likely to respond to a low-speed moving AV, which is not dangerous to them. As described in Section II, previous works have shown the importance of the vehicle movement (e.g., speed, distance) for pedestrians. However, they were based on simulations or subjective data, while this work describes the quantitative results of a field test performed with a driverless vehicle.

The relationship between the absence or presence of eye contact and parameters related to road safety such as distance and TTC was not significant. Therefore, it could not be confirmed or disconfirmed whether eye contact with an AV in the tested shared space scenario, in which traffic relies more on the informal rules of foot traffic without traffic lights, road markings or signs that indicate the right-of-way, facilitated cooperative action.

During the experiment, it could be observed that in most cases people were distracted, using a cell phone or conversing. For safety reasons, in these cases the AV stopped, arousing the pedestrians' curiosity. Interestingly, a high number of pedestrians first became aware of the vehicle only when it stopped.


From the results obtained in Section VI, we can conclude that the implementation of visual communication cues for interacting with VRUs is not necessarily required for a shared space in which informal traffic rules apply. Such cues are more likely to help when vehicle and pedestrian have potential conflicts that cause danger. These results are in line with the findings in [22], [23] and [24], which stated that information shown on external monitors was not determinant in defining pedestrian behavior, the distance and speed of the vehicle being more decisive [25].

Therefore, and in line with the finding in [21], future work will focus on other communication signals such as auditory cues. We will also use additional sensors for cataloging pedestrian behavior that rely on the reconstruction of 3D points to determine, for example, the number of pedestrians who crossed behind the vehicle, as the camera and laser used were only able to record situations that occurred in front of the vehicle.

ACKNOWLEDGMENT

This work was supported by the Austrian Ministry for Transport, Innovation and Technology (BMVIT) Endowed Professorship for Sustainable Transport Logistics 4.0.

REFERENCES

[1] C. Olaverri-Monreal, “Autonomous vehicles and smart mobility related technologies,” Infocommunications Journal, vol. 8, no. 2, pp. 17–24, 2016.

[2] The BMW Vision iNext. Future Focused. [Online]. Available: https://www.bmwgroup.com/BMW-Vision-iNEXT

[3] CB Insights. 46 Corporations Working On Autonomous Vehicles. New York. [Online]. Available: https://www.cbinsights.com/research/autonomous-driverless-vehicles-corporations-list/

[4] Renault EZ-GO reveal at the 2018 Geneva Motor Show. [Online]. Available: https://en.media.groupe.renault.com/

[5] ProPILOT. [Online]. Available: https://www.nissan-global.com/en/technology/overview/propilot.html

[6] A. Allamehzadeh and C. Olaverri-Monreal, “Automatic and manual driving paradigms: Cost-efficient mobile application for the assessment of driver inattentiveness and detection of road conditions,” in 2016 IEEE Intelligent Vehicles Symposium (IV). IEEE, 2016, pp. 26–31.

[7] W. Morales-Alvarez, M. J. Gomez-Silva, G. Fernandez-Lopez, F. García-Fernandez, and C. Olaverri-Monreal, “Automatic Analysis of Pedestrian’s Body Language in the Interaction with Autonomous Vehicles,” in IEEE Intelligent Vehicles Symposium (IV), 2018, pp. 1–6.

[8] E. Jaffe, “6 Places where cars, bikes and pedestrians all share the road as equals,” 2015. [Online]. Available: shorturl.at/mMRW2

[9] A. Hussein, F. Garcia, J. M. Armingol, and C. Olaverri-Monreal, “P2V and V2P communication for Pedestrian warning on the basis of Autonomous Vehicles,” in IEEE International Conference on Intelligent Transportation Systems (ITSC2016). IEEE, 2016, pp. 2034–2039.

[10] S. Schmidt and B. Farber, “Pedestrians at the kerb – Recognising the action intentions of humans,” Transportation Research Part F: Traffic Psychology and Behaviour, vol. 12, no. 4, pp. 300–310, Jul. 2009. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S1369847809000102

[11] J. A. Oxley, E. Ihsen, B. N. Fildes, J. L. Charlton, and R. H. Day, “Crossing roads safely: An experimental study of age differences in gap selection by pedestrians,” Accident Analysis & Prevention, vol. 37, no. 5, pp. 962–971, Sep. 2005. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0001457505000795

[12] H. Hamaoka, T. Hagiwara, M. Tada, and K. Munehiro, “A Study on the behavior of pedestrians when confirming approach of right/left-turning vehicle while crossing a crosswalk,” IEEE Intelligent Vehicles Symposium, Proceedings, vol. 10, no. 2011, pp. 99–103, 2013.

[13] T. Lagstrom and V. M. Lundgren, “AVIP – Autonomous vehicles’ interaction with pedestrians,” Ph.D. dissertation, Chalmers University of Technology.

[14] S. Yang, “Driver behavior impact on pedestrians’ crossing experience in the conditionally autonomous driving context,” 2017. [Online]. Available: http://kth.diva-portal.org/smash/record.jsf?pid=diva2%3A1169360&dswid=6775

[15] League of American Bicyclists, “Autonomous and Connected Vehicles: Implications for Bicyclists and Pedestrians.”

[16] M. Matthews, G. V. Chowdhary, and E. Kieson, “Intent Communication between Autonomous Vehicles and Pedestrians,” Tech. Rep. [Online]. Available: https://arxiv.org/pdf/1708.07123.pdf

[17] L. Fridman, B. Mehler, L. Xia, Y. Yang, L. Y. Facusse, and B. Reimer,“To walk or not to walk: Crowdsourced assessment of external vehicle-to-pedestrian displays,” arXiv preprint arXiv:1707.02698, 2017.

[18] K. Mahadevan, S. Somanath, and E. Sharlin, “Communicating Awareness and Intent in Autonomous Vehicle-Pedestrian Interaction,” in Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems – CHI ’18. New York, New York, USA: ACM Press, 2018, pp. 1–12. [Online]. Available: http://dl.acm.org/citation.cfm?doid=3173574.3174003

[19] C.-M. Chang, K. Toda, D. Sakamoto, and T. Igarashi, “Eyes on a Car: an Interface Design for Communication between an Autonomous Car and a Pedestrian,” 2017. [Online]. Available: https://doi.org/10.1145/3122986.3122989

[20] M. Beggiato, C. Witzlack, S. Springer, and J. Krems, “The Right Moment for Braking as Informal Communication Signal Between Automated Vehicles and Pedestrians in Crossing Situations,” 2018, pp. 1072–1081. [Online]. Available: http://link.springer.com/10.1007/978-3-319-60441-1_101

[21] M. De Miguel, D. Fuchshuber, A. Hussein, and C. Olaverri-Monreal, “Perceived Pedestrian Safety: Public Interaction with Driverless Vehicles,” in 2019 IEEE Intelligent Vehicles Symposium (IV). IEEE, 2019, pp. 1–6.

[22] D. Rothenbucher, J. Li, D. Sirkin, B. Mok, and W. Ju, “Ghost driver: A field study investigating the interaction between pedestrians and driverless vehicles,” in 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). IEEE, Aug. 2016, pp. 795–802. [Online]. Available: http://ieeexplore.ieee.org/document/7745210/

[23] M. Clamann, M. Aubert, and M. L. Cummings, “Evaluation of vehicle-to-pedestrian communication displays for autonomous vehicles,” 2017.[Online]. Available: https://trid.trb.org/view.aspx?id=1437891

[24] A. Pillai, “Virtual Reality based Study to Analyse Pedestrian Attitude towards Autonomous Vehicles,” Master’s thesis, School of Science, Master’s Programme in ICT Innovation, 2017.

[25] A. Rasouli and J. K. Tsotsos, “Autonomous Vehicles That Interact With Pedestrians: A Survey of Theory and Practice,” IEEE Transactions on Intelligent Transportation Systems, pp. 1–19, 2019. [Online]. Available: https://ieeexplore.ieee.org/document/8667866/

[26] D. Gomez, P. Marín Plaza, A. Hussein, A. de la Escalera, and J. M. Armingol, “ROS-based Architecture for Autonomous Intelligent Campus Automobile (iCab),” UNED Plasencia Revista de Investigación Universitaria, vol. 12, pp. 257–272, 2016.

[27] Z. Cao, T. Simon, S.-E. Wei, and Y. Sheikh, “Realtime Multi-Person2D Pose Estimation using Part Affinity Fields,” in CVPR, 2017.

[28] Z. Cao, G. Hidalgo, T. Simon, S.-E. Wei, and Y. Sheikh, “OpenPose: realtime multi-person 2D pose estimation using Part Affinity Fields,” arXiv preprint arXiv:1812.08008.

[29] S.-E. Wei, V. Ramakrishna, T. Kanade, and Y. Sheikh, “Convolutionalpose machines,” in CVPR, 2016.

[30] T. C. Schelling, Choice and consequence. Harvard University Press,1984.

[31] T. Vanderbilt, Traffic: Why We Drive the Way We Do (and What It Says About Us). Vintage, 2009.

[32] C. Olaverri-Monreal, M. Gvozdic, and B. Muthurajan, “Effect ondriving performance of two visualization paradigms for rear-endcollision avoidance,” in 2017 IEEE 20th International Conference onIntelligent Transportation Systems (ITSC). IEEE, Oct 2017, pp. 77–82. [Online]. Available: http://ieeexplore.ieee.org/document/8317937/