Yorick Wilks (ed.): “Close Engagements with Artificial Companions” (2010)

Künstl Intell (2012) 26:205–206
DOI 10.1007/s13218-012-0173-8

BOOK REVIEW

Ramin Yaghoubzadeh
Sociable Agents Group, CITEC, Bielefeld University, P.O. Box 100131, 33501 Bielefeld, Germany
e-mail: [email protected]

Published online: 25 February 2012
© Springer-Verlag 2012

Yorick Wilks’ compilation [1] was created in the wake of a 2007 workshop on artificial companions (ACs): robots or virtual agents intended to keep humans company and/or to aid them in tasks over extended periods of time.

Setting the stage, S. Turkle recounts a variety of situations in which people attribute animacy and agency to objects in surprising ways, calling the “authenticity” of humans into question. Y. Wilks introduces the field by pointing out people’s ability to transfer affect to animated dolls, and presents an artificial companion for the elderly that provides not only comfort but also utility by helping them organize autobiographical memories. He states the requirement that ACs be able to provide the rationale for their actions when asked. Such scrutability is, lamentably, not fully expanded upon in later parts of the book.

In Sect. 2, “Ethical and philosophical issues”, L. Floridi suggests that ACs might become helpers for older computer-savvy people rather than for computer illiterates. He adds that the insight that we might not be the only informational organisms follows a sequence of then-revolutionary insights throughout our history. S.G. Pulman identifies “Conditions for Companionhood”, namely an AC having intentions toward us, recognizing and treating us as individuals, mutually predictable behavior, independence, and effortlessness, adding that fluidity in interaction is also essential. K. O’Hara challenges the Turing test as a suitable test for ACs, further stating that ACs could work as part of a hybrid identity in a world that needs “globalized trust”, since tokens of identity in interaction have become more and more abstract “as our bodies disappear”. He notes that important tasks are already entrusted to machines, e.g. autopilots, as Peltu and Wilks add later.

In Sect. 3, “Social and psychological issues”, M.A. Boden argues that, barring technical and design problems, well-working conversationalist ACs could harm a person’s privacy by seducing them towards self-disclosure and not honoring secrets. She also questions ACs’ ability to create true empathy through make-believe displays, and the morality of possible successes. J.J. Bryson takes the clear position that robots must never be viewed as persons, since this dehumanizes humans; that ACs need not look human; and that deceiving people into viewing robots as people is inherently immoral. To ensure that no one is dehumanized, one must view robots as slaves, “servants you own”. She adds that interaction with ACs can provide benefits but incurs costs: time that cannot be spent on social interaction with humans. The prime obligation on developers is not to design in a way that coerces users into ethical treatment of robots. Moral and legal responsibility lies fully with those who design and deploy such systems; the systems themselves are never culpable. D. Evans argues that total personalization would not suffice to make an AC a perfect romantic lover; it would also have to be able to ultimately reject its human partner, which would fulfill the “desire to be desired” but be a bad selling point. D. Levy argues that humans might be able to fall in love with ACs, noting that people are prone to emotional bonds even to objects, and probably far more so to personalized ACs; he distinguishes between long-term relationships and falling in love. W. Lowe highlights the possibility for ACs to help their users stick to self-change plans, either by attempting to prevent actions that lead to later harm or by eliciting self-reflection after such actions. Distinctness and independence are required for such Companions to be taken seriously, while sensitivity is required to avoid impositions.




D.M. Romano subsequently identifies requirements for believable human-like ACs, noting that it is fundamental that the user sense intelligence in the agent, while realistic looks are less important, and that humor and politeness are important factors, as are subtle effects of emotions on interaction parameters. Taylor, Jain and Swan trace the development of AI from its beginnings to the current notion that intelligence need not replicate human intelligence. They present a line of research that develops autonomous robots that harvest biological fuel, which could blur the boundary between the non-organic (engineered) and the naturally occurring. Y. Wilks then raises a number of points about ACs: that they could occupy a place between people and things, as dogs already do in English Common Law; that they might be “Victorian Companions”, reserved and discreet but trustworthy entities; and that the immediate expression of an “emotion” the agent “feels” is not always a good path to choose. Moreover, he makes a case for the disclosure of information among ACs, stating that privacy issues are essentially controllable and that great benefits are imaginable, for instance for shy users.

In Sect. 4, “Design issues”, Bee et al. present a virtual head that visualizes emotions according to emotional cues from users’ voices; potential modes of reciprocation could range from mirroring to deliberate empathy. Bevacqua et al. present the ECA “Greta”, the widely used SAIBA architecture, and a feedback-giving listener agent built with them. Catizone et al. present details of the Senior Companion project, with its capability of breaking the boundaries of a closed conversational scenario through suitable online lookups for comments and small talk. R. Cowie emphasizes the importance of emotion awareness and discounts simplistic models of emotion display as failing to capture important aspects. Centrally, he poses important fundamental questions about ACs: Is it acceptable to deceive users? Is it acceptable to let them interact with a good system if that means they reduce their engagement in the outside world? And what situations and costs can arise from misattribution? A. Newell argues against the badly informed design often chosen by technical experts, and offers an interesting complement to user-centered design and ethnography, especially for user groups with special needs: the use of live theater and subsequent discussion. A. Sloman produces a functional taxonomy of ACs, contrasting engaging and enabling functions. He focuses on the latter and gives a sober-minded account of their infeasibility in the near future except in small domains: universal advice would entail very powerful reasoning not only about the user but also about properties of the world, such as materials. A.F.T. Winfield emphasizes the need for robots to learn by imitation, also among robots, and offers a scenario in which a group of specialized autonomous robots replaces one single AC, concluding that while they could have a theory of mind of the user, the inverse would likely not be the case.

In Sect. 5, “Special purpose Companions”, Eynon and Davies analyze the desired properties of an assistant AC for self-learning, weighing the benefits against possible problems. S. Nirenburg presents the Maryland Virtual Patient, an agent with a powerful NLU and reasoning engine capable of simulating the effects of disease progression on behavior and reasoning. Sharkey and Sharkey highlight the possible demographic necessity of having care robots in the future, give an overview of existing work, and pose ethical questions: How firmly may a robot grip someone to prevent self-harm? Letting people treat toys as living things is deceit, but is such deceit always a moral wrong? Peltu and Wilks conclude the book by asking further questions and adding useful compilations of AC features and guidelines.

The strengths of the volume lie neither in comprehensive overviews of technology or present design approaches, although a selection of these can be found, nor in definitive answers to ethical or social issues, but rather in the fact that many important issues are raised, by proponents, skeptics, and avid opponents of various concepts, from a plethora of relevant disciplines. In light of the theoretical and technical challenges still to be solved, predictions about the availability of universal helpers seem more vague than predictions about fusion power, even for optimists; there is thus ample time for discussion. The book offers vast opportunities for controversial discussion, making it a good choice as course material in the ethics of technology. It is a very suitable entry point into the ethics of the field and a highly recommended read, especially for technical people entering it, and in particular for those designing systems for the frail.

References

1. Wilks Y (ed) (2010) Close engagements with artificial companions: key social, psychological, ethical and design issues. Benjamins, Amsterdam