
    Autonomous Vehicles Need Experimental Ethics:

    Are We Ready for Utilitarian Cars?

    Jean-François Bonnefon¹, Azim Shariff², and Iyad Rahwan³

    ¹Center for Research in Management, Toulouse School of Economics, Toulouse, 31000, France
    ²Department of Psychology, 1277 University of Oregon, Eugene, OR 97403-1227, USA

    ³Media Laboratory, Massachusetts Institute of Technology, Cambridge, MA, USA

    October 13, 2015

    Abstract

    The wide adoption of self-driving, Autonomous Vehicles (AVs) promises to dramatically reduce the number of traffic accidents. Some accidents, though, will be inevitable, because some situations will require AVs to choose the lesser of two evils, for example, running over a pedestrian on the road or a passer-by on the side, or choosing whether to run over a group of pedestrians or to sacrifice the passenger by driving into a wall. It is a formidable challenge to define the algorithms that will guide AVs confronted with such moral dilemmas. In particular, these moral algorithms will need to accomplish three potentially incompatible objectives: being consistent, not causing public outrage, and not discouraging buyers. We argue that, to achieve these objectives, manufacturers and regulators will need psychologists to apply the methods of experimental ethics to situations involving AVs and unavoidable harm. To illustrate our claim, we report three surveys showing that laypersons are relatively comfortable with utilitarian AVs, programmed to minimize the death toll in case of unavoidable harm. We give special attention to whether an AV should save lives by sacrificing its owner, and provide insights into (i) the perceived morality of this self-sacrifice, (ii) the willingness to see this self-sacrifice being legally enforced, (iii) the expectations that AVs will be programmed to self-sacrifice, and (iv) the willingness to buy self-sacrificing AVs.

    1 Introduction

    In 2007, a sequence of technical advances enabled six teams to complete the DARPA Urban Challenge, the first benchmark test for autonomous driving in realistic urban environments [Montemerlo, B. et al, 2008, Urmson, C. et al, 2008]. Since then, major research labs spearheaded efforts to build and test Autonomous Vehicles (AVs). The Google Car, for example, has already succeeded in covering thousands of miles of real-road driving [Waldrop, 2015]. AVs promise world-changing benefits by increasing traffic efficiency [Van Arem et al., 2006], reducing pollution [Spieser, K. et al, 2014], and eliminating up to 90% of traffic accidents [Gao et al., 2014].


    Not all accidents will be avoided, though, and accidents involving driverless cars will create the need for new kinds of regulation, especially in cases where harm cannot be entirely avoided.

    Fig. 1 illustrates three situations in which harm is unavoidable. In one case, an AV may avoid harming several pedestrians by swerving and sacrificing a passer-by. In the other two cases, the AV is faced with the choice of sacrificing its own passenger, who may be its owner, in order to save one or more pedestrians. The behavior of AVs in these situations of unavoidable harm raises non-trivial questions, both legal and moral, which have commanded public interest but have not yet been empirically investigated. Indeed, while we could easily trace multiple public discussions of these situations, we have not identified a single scientific article dealing with the choices an AV must make in situations of unavoidable harm.

    On the legal side, since the control algorithm ultimately makes the decision to hit a pedestrian, passer-by, or wall, it is not obvious whether the passenger should be held legally accountable for this outcome. Some argue that liability must shift from the driver to the manufacturer, because failure to anticipate decisions like those in Fig. 1 may amount to negligence in design under Product Liability law [Villasenor, 2014]. In this case, car manufacturers may become the de facto insurers against accidents, or may rely on the expertise of existing motor insurers [Jain et al., 2015]. Today, facing regulation that lags behind technology [UK Department for Transport, 2015], manufacturers of semi-autonomous vehicles are considering ad hoc workarounds. For example, to minimize manufacturer liability associated with their new automated overtaking feature, Tesla Motors will actually require the driver to initiate the feature, thus ensuring legal responsibility for the maneuver's consequences falls with the driver [Ramsey, 2015].

    In this context, defining the algorithms that will guide AVs in situations of unavoidable harm is a formidable challenge, given that the decision to run over pedestrians, to swerve into a passer-by, or to self-destruct is a moral one [Marcus, 2012]. Indeed, in situations where harm cannot entirely be avoided, it must be distributed, and the distribution of harm is universally considered to fall within the moral domain [Haidt, 2012]. Accordingly, the control algorithms of AVs will need to embed moral principles guiding their decisions in situations of unavoidable harm [Wallach and Allen, 2008]. In this respect, manufacturers and regulators will need to accomplish three potentially incompatible objectives: being reasonably consistent, not causing public outrage, and not discouraging buyers.

    Not discouraging buyers is a commercial necessity, but it is also in itself a moral imperative, given the social and safety benefits AVs provide over conventional cars. Meanwhile, avoiding public outrage, that is, adopting moral algorithms that align with human moral attitudes, is key to fostering public comfort with allowing the broad use of AVs in the first place. However, to pursue these two objectives simultaneously may lead to moral inconsistencies. Consider for example the case displayed in Fig. 1a, and assume that the most common moral attitude is that the car should swerve. This would fit a utilitarian moral doctrine [Rosen, 2005], according to which the moral course of action is to minimize the death toll. But consider then the case displayed in Fig. 1c. The utilitarian course of action, in that situation, would be for the car to swerve and kill its owner, but a driverless car programmed to follow this course of action might discourage buyers, who may consider that their own safety should trump other considerations. Even though such situations may be exceedingly rare, their emotional saliency is likely to give them broad public exposure and a disproportionate weight in individual and public decisions about AVs.
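    To make the utilitarian rule concrete, the sketch below (ours, purely illustrative, and not any manufacturer's control code) picks whichever maneuver minimizes the death toll; the action names and casualty counts are hypothetical.

        # Minimal sketch of a utilitarian decision rule: choose the maneuver
        # that kills the fewest people. Purely illustrative; real AV control
        # stacks expose nothing like this interface.

        def utilitarian_choice(options: dict) -> str:
            """Return the action whose outcome kills the fewest people.

            `options` maps an action name (e.g. 'stay', 'swerve') to the
            number of people killed if that action is taken.
            """
            return min(options, key=options.get)

        # The dilemma of Fig. 1c: staying kills ten pedestrians,
        # swerving kills the car's own passenger.
        print(utilitarian_choice({"stay": 10, "swerve": 1}))  # -> swerve

    A rule this simple gives the passenger's life no special weight, which is exactly the feature that may discourage buyers.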


    Figure 1: Three traffic situations involving imminent unavoidable harm. (a) The car can stay on course and kill several pedestrians, or swerve and kill one passer-by. (b) The car can stay on course and kill one pedestrian, or swerve and kill its passenger. (c) The car can stay on course and kill several pedestrians, or swerve and kill its passenger.


    We suggest that a way out of this conundrum is to adopt a data-driven approach, inspired by experimental ethics [Greene, 2014a], to identify moral algorithms that people are willing to accept as citizens and to be subjected to as car owners. Indeed, situations of unavoidable harm, as illustrated in Fig. 1, bear a striking resemblance to the flagship dilemmas of experimental ethics, that is, the so-called trolley problems. In typical trolley problems, an agent has the opportunity to save several persons by sacrificing the life of a single individual. This decision is commonly labeled as utilitarian, since it amounts to accepting the infliction of harm in the name of the greater good.

    Trolley problems inspired a huge number of experiments, aimed at identifying both the antecedents and the cognitive processing leading to a utilitarian decision [Greene, 2014c, Greene, 2014b]. The results of these experiments do not apply well to situations involving AVs, though. For example, some pressing questions about AVs do not make sense in typical trolley problems that involve a human decision-maker: Would people purchase AVs programmed to be utilitarian? What are their expectations about the way car manufacturers will program AVs? Other questions could have been addressed in a human context, but were not, because they were


    only made salient in the context of AVs. For example, would people accept legal enforcement of utilitarian decisions, and more so for AVs than for humans? And finally, perhaps the most striking problem created by AVs, at least in the public opinion, is whether a car may decide to sacrifice its passenger in order to save several lives on the road. How do people balance their motivations for self-preservation as drivers with the protection of pedestrians and other drivers, when considering whether to own an AV, when thinking about how AVs will or should be programmed, and when pondering which programming might be legally enforced?

    We believe that regulators and manufacturers will soon be in pressing need of answers to these questions, and that answers are most likely to come from surveys employing the protocols of experimental ethics. To be clear, we do not mean that these thorny ethical questions can be solved by polling the public and enforcing the majority opinion. Survey data will inform the construction and regulation of moral algorithms for AVs, not dictate them. The importance of survey data must not be underestimated, though, since algorithms that go against the moral expectations of citizens (or against the preferences of consumers) are likely to impair the smooth adoption of AVs.

    Having argued that autonomous vehicles need experimental ethics, we illustrate this approach in the rest of this article by reporting a series of three surveys built upon the protocols commonly used to study trolley problems, which we adapted to the new context of traffic accidents involving AVs. In this first series of surveys, we focus on the question of whether respondents would approve of utilitarian AVs, programmed to minimize the death toll of an accident, even when this requires sacrificing the car's own passenger.

    2 Materials and Methods

    We conducted three online surveys in June 2015. All studies were programmed on Qualtrics survey software and recruited participants from the Mechanical Turk platform, for a compensation of 25 cents. In all studies, participants provided basic demographic information, such as age, sex, and religiosity. At the end of all studies, three 7-point scales measured the overall enthusiasm that participants felt for AVs: how excited they felt about a future in which autonomous self-driving cars are an everyday part of our motoring experience, how fearful they were of this future (reverse coded), and how likely they were to buy an autonomous vehicle, should they become commercially available by the time they would next purchase a new car. Cronbach alpha for these three items was .82 in Study 1, .78 in Study 2, and .81 in Study 3. Regression analyses (shown in the Supplementary Information) showed that enthusiasm was significantly greater for male participants in all studies, and that enthusiasm was significantly greater for younger and less religious participants in two studies, although these two effects were smaller than that of sex. Given these results, all subsequent analyses controlled for sex, age, and religiosity.
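    For reference, Cronbach's alpha for a three-item index of this kind can be computed directly from its definition, alpha = k/(k-1) * (1 - sum of item variances / variance of the summed scale). The sketch below is ours and runs on simulated 7-point responses, since the raw survey data are not reproduced here.

        import numpy as np

        def cronbach_alpha(items):
            """Cronbach's alpha for an (n_participants, k_items) array:
            alpha = k/(k-1) * (1 - sum of item variances / variance of sums)."""
            items = np.asarray(items, dtype=float)
            k = items.shape[1]
            item_vars = items.var(axis=0, ddof=1).sum()
            total_var = items.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1 - item_vars / total_var)

        # Simulated stand-in for the three 7-point enthusiasm items.
        rng = np.random.default_rng(0)
        latent = rng.normal(size=500)  # shared enthusiasm factor
        items = np.column_stack([
            np.clip(np.round(4 + 1.5 * latent + rng.normal(size=500)), 1, 7)
            for _ in range(3)
        ])
        print(round(cronbach_alpha(items), 2))  # around .8 for these settings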

    2.1 Study 1

    Participants (241 men and 161 women, median age = 30) read a vignette in which one or more pedestrians could be saved if a car were to swerve into a barrier, killing its passenger. Participants were randomly assigned to one group of a 2×2×2 between-participant design, which manipulated the number of pedestrians who could be saved (one vs. ten), whether the driver or the self-driving car would make the decision


    to swerve, and whether participants were instructed to imagine themselves in the car, or to imagine an anonymous character. The experiment was programmed so that the sex of this anonymous character always matched that of the participant (this information being collected early in the experiment). In addition to the vignette, participants were given a pictorial representation of the swerve and stay options. See the Supplementary Information for detailed examples of the vignettes and their pictorial representation.

    2.2 Study 2

    Participants (144 men and 166 women, median age = 31) were randomly assigned to one of four versions of a vignette, in which either 1 or 10 pedestrians could be saved if an autonomous, self-driving car were to swerve into either a barrier (killing its passenger) or another pedestrian (killing that pedestrian). Participants were told that three algorithms could be implemented in the car for such a contingency: always stay, always swerve, and random. Additionally, participants were given a pictorial representation of the swerve and stay options. Participants then answered three questions, introduced in random order for each participant: How would you rate the relative morality of these three algorithms? How would you rate your relative willingness to buy a car with one of these three algorithms? How would you rate your relative comfort with having cars with each of these three algorithms on the roads in your part of the world? To provide their responses, participants assigned points to each of the three algorithms, taken from an overall budget of 100 points. See the Supplementary Information for detailed examples of the vignettes, their pictorial representation, and a screen capture of the budget-splitting procedure.

    2.3 Study 3

    Participants (97 men and 104 women, median age = 30) read a vignette in which ten pedestrians could be saved if a car were to swerve into a barrier, killing its passenger. Participants were randomly assigned to one of two groups, manipulating whether participants were instructed to imagine themselves in the car, or to imagine an anonymous character. The experiment was programmed so that the sex of this anonymous character always matched that of the participant (this information being collected early in the experiment). Thus, the experimental vignettes were similar to those used in Study 1 (restricted to situations involving ten pedestrians). Participants were asked whether the moral course of action was to swerve or stay on course (dichotomous response variable), and whether they expected future driverless cars to be programmed to swerve or to stay on course in such a situation (dichotomous response variable). Next, participants were asked to indicate whether it would be more appropriate to protect the driver at all costs or to maximize the number of lives saved, on a continuous slider scale anchored at these two expressions.

    3 Results

    Results suggest that participants were generally comfortable with utilitarian AVs, programmed to minimize an accident's death toll. This is especially true of situations that do not involve the sacrifice of the AV's owner. For example, Fig. 2b (right) displays the responses of participants in Study 2, who expressed their willingness to buy


    (and their comfort with others buying) different types of AVs. An analysis of covariance showed that participants preferred AVs programmed to swerve into a passer-by when it could save 10 (but not one) pedestrians, F(1, 141) = 41.3, p < .001, whether they were thinking of buying the car themselves or having other people drive the car.

    Critically important, however, are preferences and expectations about whether an AV should be programmed to save lives by sacrificing its owner. Our results provide information on (i) the morality of this self-sacrifice, (ii) the willingness to see this self-sacrifice being legally enforced, (iii) the expectations that AVs will be programmed to self-sacrifice, and (iv) the willingness to buy self-sacrificing cars.

    First, participants in Study 1 approved of AVs making utilitarian self-sacrifices to the same extent as they approved this decision for human drivers (Fig. 2a, left). This was consistent with the results of an analysis of covariance where morality was the dependent outcome, and the predictors were the number of lives that could be saved; whether the agent making the sacrifice decision was human or machine; whether participants pictured themselves or another person in the car; and all the 2-way and 3-way interaction terms. (As in all analyses, and as explained in the Materials and Methods section, age, sex, and religiosity were entered as covariates.) No effect was significant except that of the number of lives that could be saved, F(1, 367) = 48.9, p < .001.
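    For readers who want to reproduce this kind of analysis of covariance, it can be written as an ordinary least-squares model with the factorial terms plus the covariates. The sketch below uses statsmodels on a simulated data frame; every column name is a hypothetical stand-in for the corresponding survey variable.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        # Simulated stand-in for the Study 1 data: 2x2x2 design plus covariates.
        rng = np.random.default_rng(1)
        n = 400
        df = pd.DataFrame({
            "morality":    rng.uniform(0, 100, n),              # approval of the sacrifice
            "lives":       rng.choice(["one", "ten"], n),       # pedestrians that could be saved
            "agent":       rng.choice(["human", "machine"], n), # who makes the decision
            "perspective": rng.choice(["self", "other"], n),    # who is pictured in the car
            "sex":         rng.choice(["male", "female"], n),
            "age":         rng.integers(18, 70, n),
            "religiosity": rng.uniform(0, 100, n),
        })

        # ANCOVA: full factorial on the three manipulated factors,
        # controlling for sex, age, and religiosity.
        model = smf.ols(
            "morality ~ C(lives) * C(agent) * C(perspective)"
            " + C(sex) + age + religiosity",
            data=df,
        ).fit()
        print(sm.stats.anova_lm(model, typ=2))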

    Second, participants in Study 1 were more amenable to legally enforcing self-sacrifice for AVs than for human drivers (Fig. 2a, right). When the dependent variable in the analysis of covariance was the legal enforceability of self-sacrifice, the analysis detected a significant effect of the number of lives that could be saved, F(1, 367) = 7.3, p < .01, and an effect of whether the decision-maker was human or machine, F(1, 367) = 9.2, p < .01. Participants were more willing to consider legal enforcement of the sacrifice when it saved a greater number of lives, and when the decision was made by a self-driving car rather than by a human driver.

    Third, whereas 3/4 of participants in Study 3 believed that an AV should self-sacrifice to save ten lives, less than 2/3 (a significantly lower proportion, McNemar's χ² = 6.7, p < .01) believed that AVs would actually be programmed that way in the future. Similarly, whereas these participants believed that AVs should, generally speaking, place the greater good over the safety of the passenger, they were significantly less confident that manufacturers would program AVs with this goal, F(1, 179) = 18.9, p < .001. Thus, rather than being concerned about AVs being too utilitarian (as is often portrayed in science fiction), participants were generally wary that AVs would be programmed to protect their passengers at all costs.
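    The comparison between these two paired proportions ("should swerve" vs. "will be programmed to swerve") is a McNemar test. The sketch below shows the computation with statsmodels on a hypothetical 2×2 table of paired answers; the counts are invented.

        import numpy as np
        from statsmodels.stats.contingency_tables import mcnemar

        # Hypothetical paired counts for ~200 respondents:
        # rows = "AVs should swerve" (yes / no)
        # cols = "AVs will be programmed to swerve" (yes / no)
        table = np.array([[120, 30],
                          [ 10, 40]])

        # McNemar tests whether the two discordant (off-diagonal) cells differ,
        # i.e. whether "should" endorsements exceed "will" predictions.
        result = mcnemar(table, exact=False, correction=True)
        print(result.statistic, result.pvalue)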

    Fourth, whereas participants in Study 2 generally supported others buying AVs programmed for utilitarian self-sacrifice, they were less willing to buy such AVs themselves (even when the sacrifice would save ten pedestrians). We ran an analysis of covariance of the budget share allocated to the AV programmed to self-sacrifice, in which the predictors were the number of lives that could be saved and whether participants were asked about their own willingness to buy such a car, or their comfort with seeing other people drive such a car. Participants allocated a larger budget share to the self-sacrificing AV when it could save 10 pedestrians, F(1, 132) = 7.6, p < .01, and when they were asked about other people traveling in the AV rather than


    [Figure 2 appears here: (a) approval ratings (0-100) of the morality of the sacrifice and of legally enforcing it, for conventional vs. driverless cars, in conditions where one's own car or another car can save one or ten pedestrians; (b) preference (0-100 budget points) among the stay, random, and swerve algorithms, for cars that save pedestrians by swerving into a wall (killing the passenger) or into one passer-by (preserving the passenger).]

    Figure 2: Selected experimental results. (a) People approve of autonomous vehicles (AVs) sacrificing their passenger to save pedestrians, and show greater willingness to see this action legally enforced when the decision is made by an AV than by a human driver. (b) People are willing to buy AVs programmed to swerve into a passer-by or into a wall in order to save pedestrians, although their utilitarianism is qualified by a self-preserving bias.


    themselves, F(1, 135) = 4.9, p = .03. When considering which AV they would buy, participants were split between AVs that would self-sacrifice, AVs that would not, and AVs that would randomly pick a course of action.

    4 Discussion

    Three surveys suggested that respondents might be prepared for autonomous vehicles programmed to make utilitarian moral decisions in situations of unavoidable harm. This was even true, to some extent, of situations in which the AV could sacrifice its owner in order to save the lives of other individuals on the road. Respondents praised the moral value of such a sacrifice the same, whether a human or a machine made the decision. Although they were generally unwilling to see self-sacrifices enforced by law, they were more prepared for such legal enforcement if it applied to AVs than if it applied to humans. Several reasons may underlie this effect: unlike humans, computers can be expected to dispassionately make utilitarian calculations in an instant; computers, unlike humans, can be expected to unerringly comply with the law, rendering moot the thorny issue of punishing non-compliers; and finally, a law requiring people to kill themselves would raise considerable ethical challenges.

    Even in the absence of legal enforcement, most respondents agreed that AVs should be programmed for utilitarian self-sacrifice, and to pursue the greater good rather than protect their own passenger. However, they were not as confident that AVs would be programmed that way in reality, and for a good reason: they actually wished others to cruise in utilitarian AVs more than they wanted to buy utilitarian AVs themselves. What we observe here is the classic signature of a social dilemma: people mostly agree on what should be done for the greater good of everyone, but it is in everybody's self-interest not to do it themselves. This is both a challenge and an opportunity for manufacturers or regulatory agencies wishing to push for utilitarian AVs: even though self-interest may initially work against such AVs, social norms may soon be formed that strongly favor their adoption.

    We emphasize that our results provide but a first foray into the thorny issues raised by moral algorithms for AVs. For example, our scenarios did not feature any uncertainty about decision outcomes, but future work will have to test scenarios that introduce the concepts of expected risk, expected value, and blame assignment. Is it acceptable for an AV to avoid a motorcycle by swerving into a wall, considering that the probability of survival is greater for the passenger of the car than for the rider of the motorcycle? Should different decisions be made when children are on board, since they both have a longer time ahead of them than adults and had less agency in being in the car in the first place? If a manufacturer offers different versions of its moral algorithm, and a buyer knowingly chooses one of them, is the buyer to blame for the harmful consequences of the algorithm's decisions?
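    Under uncertainty, the death-toll comparison becomes an expected-value calculation: each maneuver is scored by the sum of the death probabilities it imposes, and the minimizing maneuver is chosen. The sketch below applies this criterion to the motorcycle case; all probabilities are invented for illustration.

        # Sketch of an expected-fatalities criterion under outcome uncertainty.
        # All probabilities below are invented for illustration.

        def expected_deaths(parties):
            """Sum of death probabilities over everyone affected by a maneuver."""
            return sum(p_death for _, p_death in parties)

        options = {
            # Staying on course most likely kills the unprotected motorcyclist.
            "stay":   [("motorcyclist", 0.8)],
            # Swerving into the wall risks the better-protected car passenger.
            "swerve": [("passenger", 0.3)],
        }

        best = min(options, key=lambda a: expected_deaths(options[a]))
        print(best)  # -> swerve, under these made-up numbers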

    Figuring out how to build ethical autonomous machines is one of the thorniest challenges in artificial intelligence today [Deng, 2015]. As we are about to endow millions of vehicles with autonomy, taking algorithmic morality seriously has never been more urgent. Our data-driven approach highlights how the field of experimental ethics can give us key insights into the moral and legal standards that people expect from autonomous driving algorithms. When it comes to split-second moral judgments, people may very well expect more from machines than they do from each other.


    References

    [Deng, 2015] Deng, B. (2015). Machine ethics: The robot's dilemma. Nature, 523:24–26.

    [Gao et al., 2014] Gao, P., Hensley, R., and Zielke, A. (2014). A road map to the future for the auto industry.

    [Greene, 2014a] Greene, J. D. (2014a). Beyond point-and-shoot morality: Why cognitive (neuro)science matters for ethics. Ethics, 124:695–726.

    [Greene, 2014b] Greene, J. D. (2014b). The cognitive neuroscience of moral judgment and decision making. In Gazzaniga, M. S., editor, The Cognitive Neurosciences V, pages 1013–1023. MIT Press.

    [Greene, 2014c] Greene, J. D. (2014c). Moral Tribes: Emotion, Reason and the Gap Between Us and Them. Atlantic Books.

    [Haidt, 2012] Haidt, J. (2012). The Righteous Mind: Why Good People Are Divided by Politics and Religion. Pantheon Books.

    [Jain et al., 2015] Jain, N., O'Reilly, J., and Silk, N. (2015). Driverless cars: Insurers cannot be asleep at the wheel.

    [Marcus, 2012] Marcus, G. (2012). Moral machines.

    [Montemerlo, B. et al, 2008] Montemerlo, B. et al (2008). Junior: The Stanford entry in the Urban Challenge. J. Field Robotics, 25:569–597.

    [Ramsey, 2015] Ramsey, M. (2015). Who's responsible when a driverless car crashes? Tesla's got an idea.

    [Rosen, 2005] Rosen, F. (2005). Classical Utilitarianism from Hume to Mill. Routledge.

    [Spieser, K. et al, 2014] Spieser, K. et al (2014). Toward a systematic approach to the design and evaluation of automated mobility-on-demand systems: A case study in Singapore. In Meyer, G. and Beiker, S., editors, Road Vehicle Automation, pages 229–245. Springer.

    [UK Department for Transport, 2015] UK Department for Transport (2015). The pathway to driverless cars.

    [Urmson, C. et al, 2008] Urmson, C. et al (2008). Autonomous driving in urban environments: Boss and the Urban Challenge. J. Field Robotics, 25:425–466.

    [Van Arem et al., 2006] Van Arem, B., Van Driel, C. J., and Visser, R. (2006). The impact of cooperative adaptive cruise control on traffic-flow characteristics. IEEE Transactions on Intelligent Transportation Systems, 7:429–436.

    [Villasenor, 2014] Villasenor, J. (2014). Products liability and driverless cars: Issues and guiding principles for legislation.

    [Waldrop, 2015] Waldrop, M. M. (2015). Autonomous vehicles: No drivers required. Nature, 518:20–23.

    [Wallach and Allen, 2008] Wallach, W. and Allen, C. (2008). Moral Machines: Teaching Robots Right from Wrong. Oxford University Press.


    A Experimental Methods

    The article is based on three online surveys conducted in June 2015. All studies were programmed on Qualtrics and recruited participants from the Mechanical Turk platform, for a compensation of 25 cents. Study 1 involved 402 participants (161 women, median age = 30); Study 2 involved a different sample of 310 participants (166 women, median age = 31); and Study 3 involved yet a different sample of 201 participants (104 women, median age = 30). We did not exclude any participant from the analyses, but because some participants failed to answer some questions, degrees of freedom show small variations between analyses. Participants provided basic demographic information, such as age, sex, ethnicity, political ideology, and religiosity (the latter being measured on a continuous slider from "not at all religious" to "extremely religious"). At the end of all studies, three questions measured the overall enthusiasm that participants felt for driverless cars: how excited they felt about a future in which autonomous self-driving cars are an everyday part of our motoring experience (7-point scale from "not at all" to "very much"), how fearful they were of this future (same scale, reverse coded), and how likely they were to buy an autonomous vehicle, should they become commercially available by the time they would next purchase a new car (7-point scale from "not at all likely" to "extremely likely").

    A.1 Study 1

    Participants read a vignette in which one or more pedestrians could be saved if a car were to swerve into a barrier, killing its passenger. Participants were randomly assigned to one group of a 2×2×2 between-participant design, which manipulated the number of pedestrians who could be saved (one vs. ten), whether the driver or the self-driving car would make the decision to swerve, and whether participants were instructed to imagine themselves in the car, or to imagine an anonymous character. The experiment was programmed so that the sex of this anonymous character always matched that of the participant (this information being collected early in the experiment). Here is an example of the vignette (one pedestrian, participant in the car, driver makes the decision):

    You are the sole passenger driving at the speed limit down a main road. Suddenly, a pedestrian appears ahead, in the direct path of the car. You have the option to: swerve off to the side of road, where you will impact a barrier, killing you but leaving the pedestrian unharmed, or stay on your current path, where you will kill the pedestrian, but you will be unharmed.

    And here is the mirror example, as it would appear to a male participant (i.e., the anonymous individual in the car is male):

    A man is the sole passenger in an autonomous self-driving vehicle traveling at the speed limit down a main road. Suddenly, 10 people appear ahead, in the direct path of the car. The car could be programmed to: swerve off to the side of road, where it will impact a barrier, killing the passenger but leaving the ten pedestrians unharmed, or stay on its current path, where it will kill the 10 pedestrians, but the passenger will be unharmed.

    In addition to the vignette, participants were given a pictorial representation of the swerve and stay options. Figure 3 (top) displays the images that were shown to participants, in the conditions where one or ten pedestrians could be saved.


    Figure 3: Pictorial representation of the swerve and stay options in Study 1 (top) and Study 2 (bottom).


    Figure 4: Main measures of Study 1 (moral approval and legal enforceability of sacrifice).


    In Study 1, we recorded the perceived morality of the swerve option (i.e., the utilitarian sacrifice) as well as its legal enforceability, that is, the extent to which participants found it acceptable to have a law that would require cars/drivers to swerve. The sliders that were used for these two measures are shown in Figure 4. Responses were reverse coded so that higher scores would reflect higher moral approval, and higher legal enforceability, of the swerve option.

    A.2 Study 2

    Participants were randomly assigned to one of four versions of a vignette, in which either 1 or 10 pedestrians could be saved if an autonomous, self-driving car were to swerve into either a barrier (killing its passenger), or another pedestrian (killing that pedestrian). Participants were told that three algorithms could be implemented in the car for such a contingency: always stay, always swerve, and random. Here is an example of the vignette (one pedestrian, swerve into barrier):

    You are the sole passenger riding in an autonomous, self-driving car that is traveling at high speed down a main road. Suddenly, one pedestrian appears ahead, in the direct path of the car. In preparing the car for such an eventuality, the computer engineers can program the car for three options:

    always stay. In such a situation, the car would continue on its path, kill the pedestrian on the main road, but you as the passenger will be unharmed.

    always swerve. In such a situation, the car would swerve quickly, diverting the car onto the side road where it will kill you as the passenger, but the pedestrian on the main road will be unharmed.

    random. In such a situation, the car would be programmed to randomly choose to either stay or swerve.

    And here is the mirror example (ten pedestrians, swerve into another pedestrian):

    You are the sole passenger riding in an autonomous, self-driving car that is traveling at high speed down a main road. Suddenly, ten pedestrians appear ahead, in the direct path of the car. The car could however swerve to the right, where it would kill one pedestrian on the side of the road. In preparing the car for such an eventuality, the computer engineers can program the car for three options:

    always stay. In such a situation, the car would continue on its path, kill the ten pedestrians on the main road, but the pedestrian on the side of the road would be unharmed.

    always swerve. In such a situation, the car would swerve quickly, diverting the car onto the side road where it will kill the one pedestrian on the side of the road, but the pedestrians on the main road will be unharmed.

    random. In such a situation, the car would be programmed to randomly choose to either stay or swerve.


    Figure 5: Illustration of the budget-splitting procedure used in Study 2, here applied to the willingness to buy self-driving cars programmed to always stay, always swerve, or random.

    Additionally, participants were given a pictorial representation of the swerve and stay options (see Figure 3). Participants then answered three questions, introduced in random order for each participant: How would you rate the relative morality of these three algorithms? How would you rate your relative willingness to buy a car with one of these three algorithms? How would you rate your relative comfort with having cars with each of these three algorithms on the roads in your part of the world? To provide their responses, participants assigned points to each of the three algorithms, taken from an overall budget of 100 points (Figure 5).
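    Analytically, each response is a vector of three shares that sums to 100, and the budget-share analyses operate on these vectors. A minimal sketch, with invented allocations:

        import numpy as np

        ALGORITHMS = ("always stay", "always swerve", "random")

        # Invented allocations: each row is one participant's split of a
        # 100-point budget across the three algorithms.
        allocations = np.array([
            [20, 60, 20],
            [50, 30, 20],
            [10, 80, 10],
        ])

        # Every row must spend the full budget.
        assert (allocations.sum(axis=1) == 100).all()

        # Mean budget share per algorithm, the quantity analyzed in Study 2.
        for name, share in zip(ALGORITHMS, allocations.mean(axis=0)):
            print(f"{name}: {share:.1f} points")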

    A.3 Study 3

    Participants read a vignette in which ten pedestrians could be saved if a car were to swerve into a barrier, killing its passenger. Participants were randomly assigned to one of two groups, manipulating whether participants were instructed to imagine themselves in the car, or to imagine an anonymous character. The experiment was programmed so that the sex of this anonymous character always matched that of the participant (this information being collected early in the experiment). Thus, the experimental vignettes were similar to those used in Study 1 (restricted to situations involving ten pedestrians).

    Participants were asked whether the moral course of action was to swerve or stay on course (dichotomous response variable), and whether they expected future driverless cars to be programmed to swerve or to stay on course in such a situation (dichotomous response variable). Next, participants were asked to indicate whether it would be more appropriate to protect the driver at all costs or to maximize the number of lives saved, on a continuous slider scale anchored at these two expressions.

    B Experimental Results

    In all studies, we computed an index of enthusiasm about driverless cars by averaging the excitement participants felt about a future with driverless cars, their fearfulness about this future (reverse coded), and the likelihood that they might buy a driverless car next, if such cars were available. Cronbach alpha for these three items was .82 in Study 1, .78 in Study 2, and .81 in Study 3. Regression analyses showed that enthusiasm was significantly greater for male participants in all studies, and that enthusiasm was


    significantly greater for younger and less religious participants in two studies (see Table 1), although these two effects were much smaller than that of sex. Given these results, all subsequent analyses controlled for sex, age, and religiosity.

    B.1 Study 1

    To analyze participants' ratings of the morality of the sacrifice (swerve), we ran an analysis of covariance where morality was the dependent outcome. The predictors were the number of lives that could be saved; whether the agent making the sacrifice decision was human or machine; whether participants pictured themselves or another person in the car; and all the 2-way and 3-way interaction terms. As in all analyses, age, sex, and religiosity were entered as covariates. No effect was significant except that of the number of lives that could be saved, F(1, 367) = 48.9, p < .001.

    We ran a similar analysis in which the morality of the sacrifice was replaced as the dependent outcome with the legal enforceability of swerve. The analysis detected a significant effect of the number of lives that could be saved, F(1, 367) = 7.3, p < .01, and an effect of whether the decision-maker was human or machine, F(1, 367) = 9.2, p < .01. Participants were more willing to consider legal enforcement of the sacrifice when the decision was made by a self-driving car rather than by a human driver. This was mostly true, though, of situations in which they did not think of themselves in the car, as shown by analyzing separately the data provided by participants picturing another person in the car, and the data provided by participants picturing themselves in the car. In the first dataset, whether the decision-maker was human or machine had a large impact, F(1, 181) = 12.0, p < .001; in the second dataset, this predictor had a nonsignificant impact.


                      Study 1 (N = 402)   Study 2 (N = 302)   Study 3 (N = 201)
                        b       t           b       t           b       t
    Sex (Women)       -0.96   -5.59       -0.88   -4.74       -0.71   -2.77
    Age               -0.02   -2.79       -0.02   -2.57       -0.02   -1.60
    Religiosity       -0.01   -2.45       -0.01   -2.15       -0.01   -0.99

    Table 1: Impact of demographic variables on general enthusiasm for self-driving cars (entries are the regression coefficient b and its t value in each study).

    B.3 Study 3

    Logistic regressions for the two dichotomous response variables (including the control variables sex, age, and religiosity) did not detect any significant effect of the experimental condition (imagine yourself vs. imagine another individual in the car). Though this may appear inconsistent with the results from Study 2, recall that here we are asking about moral appropriateness, whereas there we were talking about willingness to buy.
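    Such a logistic regression can be written with statsmodels' formula interface; as with the earlier sketches, the data frame and column names below are hypothetical stand-ins for the survey variables.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # Simulated stand-in for the Study 3 data.
        rng = np.random.default_rng(2)
        n = 200
        df = pd.DataFrame({
            "swerve_moral": rng.integers(0, 2, n),             # 1 = swerving judged moral
            "condition":    rng.choice(["self", "other"], n),  # who is pictured in the car
            "sex":          rng.choice(["male", "female"], n),
            "age":          rng.integers(18, 70, n),
            "religiosity":  rng.uniform(0, 100, n),
        })

        # Dichotomous judgment regressed on the experimental condition,
        # controlling for sex, age, and religiosity.
        model = smf.logit(
            "swerve_moral ~ C(condition) + C(sex) + age + religiosity",
            data=df,
        ).fit(disp=False)
        print(model.summary())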

    Overall, 75% of participants deemed it more moral to swerve, but only 65% predicted that driverless cars would be programmed to swerve. The difference between these two proportions was statistically significant, McNemar's χ²(1) = 6.7, p < .01.

    On a scale from -50 (protect the driver at all costs) to +50 (maximize the number of lives saved), the average response was +24 (SD = 30) to the question of what was morally appropriate, but it was only +8 (SD = 35) to the question of how future driverless cars would be programmed. The difference between these two responses was significant, as shown by a repeated-measures analysis of covariance which included the experimental condition as a between-group predictor, and sex, age, and religiosity as control variables, F(1, 179) = 18.9, p < .001. The only other significant effect was that of the age covariate, F(1, 176) = 7.8, p < .01.
