
Agents, Knowledge, and Social Work: How People Interact With Complex Systems

Stuart Watt
Knowledge Media Institute

Overview of the presentation

• What is an agent?
• Interacting with agents
• Agents at the interface
• How people interact with agents
• Summary

What is an agent, anyway?

• “Autonomous agents are computational systems that inhabit some complex dynamic environment, sense and act autonomously in this environment, and by doing so realize a set of goals or tasks for which they are designed” – (Maes, 1995)

• “... a hardware or (more usually) software-based computer system that enjoys the following properties: autonomy..., social ability..., reactivity..., pro-activeness...” – (Wooldridge & Jennings, 1995)

• “One who does the actual work of anything, as distinguished from the instigator or employer; hence, one who acts for another” – OED

• “two common uses of the word agent: 1) one who acts, or who can act, and 2) one who acts in place of another with permission. Since ‘one who acts in place of’ acts, the second usage requires the first. Hence, let’s go for a definition of the first notion” – (Franklin & Graesser, 1996)

Agents: a new kind of medium

• Agents have two possible mediating roles
– Mediating between a person and a program
– Mediating between people collaborating

• The distinction between these may be blurred
– To participants in an interaction, it may not always be obvious what they are interacting with
– Mailing lists

• “Heterogeneous groupware”

Agents in theory and in practice

• Many existing systems, few existing successes
– Notable failures: Luigi et al., Bob
– Notable successes: Windows 95, the Active Archive

• Questions to answer
– What makes them different from programs?
– Why do they fail or succeed?
– How can we design them so they don’t fail?
– When are agents better than programs?
– Should we be using agent technology in the first place?

What does an agent look like?

• Agents can be:
– Defined by their internal behaviour
– Defined by their external behaviour
– Defined by what people think of them

• They can be presented as:
– A virtual person (e.g., Phil, Bob)
– Anthropomorphic (e.g., Peedy the Parrot, the Office Assistant)
– Mechanomorphic (e.g., like a computer)

Inside agents: beliefs, desires, and intentions

[Diagram: a belief–desire–intention architecture. Perception feeds belief; basic emotions/physiology feed desire; belief and desire combine into intention, which drives action. Beliefs, desires, and intentions remain hidden in the agent.]
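The diagram reads as a sense–deliberate–act loop. Below is a minimal sketch of that loop in Python; the class, its goal-style desires, and the example percept are all illustrative assumptions, not material from the talk.

```python
# A belief-desire-intention loop matching the slide's diagram: perception
# updates beliefs, basic emotions/physiology supply desires, and deliberation
# turns beliefs plus desires into intentions that drive action. Everything
# except the action stays hidden inside the agent. All names are illustrative.

class BDIAgent:
    def __init__(self, desires):
        self.beliefs = {}         # what the agent currently takes to be true
        self.desires = desires    # goal states, e.g. {"question_answered": True}
        self.intentions = []      # goals the agent has committed to act on

    def update_beliefs(self, percept):
        # Perception -> Belief
        self.beliefs.update(percept)

    def deliberate(self):
        # Belief + Desire -> Intention: commit to the unmet desires
        self.intentions = [goal for goal, wanted in self.desires.items()
                           if self.beliefs.get(goal) != wanted]

    def step(self, percept):
        # Intention -> Action: one pass around the loop; only the returned
        # action is visible from outside the agent
        self.update_beliefs(percept)
        self.deliberate()
        return self.intentions[0] if self.intentions else None

agent = BDIAgent({"question_answered": True})
print(agent.step({"question_answered": False}))  # -> "question_answered"
```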

Early interface agents: Apple’s ‘Knowledge Navigator’

Anthropomorphism

A collection of anthropomorphic interfaces

Fundamental tensions of agency: choice, control, and explanation

• Agents need to be infallible
– Agent works — success because of the user
– Agent doesn’t work — failure due to the agent

• People need to feel in control
– Trust, responsibility, power, authority, accountability, privacy, respect, tolerance…

• How can we evaluate these?
• Agents are alien invaders!

Is Windows an agent?

• On the pro side, Windows is:
– Autonomous
– Reactive
– Persistent
– Collaborative
– Difficult to manipulate (rather easier to manage)
– Delegated the task of looking after the hardware

• On the con side, Windows is:
– Just not smart enough to be what we (usually) think of as ‘agenty’

• Now you decide...

A taxonomy of agents

[Diagram: a taxonomy tree. Autonomous Agents branch into Biological Agents, Robotic Agents, and Computational Agents; Computational Agents branch into Software Agents and Artificial Life Agents; Software Agents branch into Task-specific Agents, Entertainment Agents, and Viruses.]
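The tree transcribes directly into nested data. A small sketch of it as a Python structure, where `path_to` is a hypothetical helper for locating a category, not anything from the talk:

```python
# The taxonomy tree from the slide as nested data, useful for checking
# where a given system sits. The helper below is purely illustrative.

TAXONOMY = {
    "Autonomous Agents": {
        "Biological Agents": {},
        "Robotic Agents": {},
        "Computational Agents": {
            "Software Agents": {
                "Task-specific Agents": {},
                "Entertainment Agents": {},
                "Viruses": {},
            },
            "Artificial Life Agents": {},
        },
    },
}

def path_to(kind, tree=TAXONOMY, trail=()):
    # Depth-first search for a category, returning its chain of ancestors
    for name, children in tree.items():
        here = trail + (name,)
        if name == kind:
            return here
        found = path_to(kind, children, here)
        if found:
            return found
    return None

print(" > ".join(path_to("Viruses")))
# Autonomous Agents > Computational Agents > Software Agents > Viruses
```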

Agents for learning with

• Different social roles
– Agents as teachers
– Agents as teachers’ assistants
– Agents as assessors
– Agents as students’ assistants
– Agents as co-learners
– Agents as facilitators

• Work with the existing social structures, not against them

Luigi and the ‘least common denominator’ approach

• Design rationale:
– Narrow, domain-specific assistants
– Designed to overcome role conflict
– Most burden on those who can gain the most
– Little or no burden on those who gain the least

• What actually happened?
– Technically, the system works fine
– Usability: yes. Acceptability: no
– Who sends the email?

The Virtual Participant

• Use of electronic conferencing is growing
– Required component of many courses
– Distributed student body
– Supplement to tutorials

• This has problems
– Not all students use it
– It can be expensive
– Some students have no other contact
– It does not offer enough to students

Goals

• To re-use the knowledge contained in discussions from previous years
• To reduce the load on tutors from answering common problems
• To encourage students to use the technology and participate
• To provide some support to students that is always available

So how does it work?

• Another ‘Participant’
– Exactly the same access as all students
– No special hardware or software required

• Reads all messages in chosen conferences
– Stores the contents of every message
– Stores the ‘history’ of every message

• Keyword and phrase matching (see the sketch below)
– Keywords stored from all messages in a thread
– Considers all possible stories
– Threshold for triggering
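A minimal sketch of this matching scheme, assuming simplified message and story structures; the `Story` class, the `THRESHOLD` value, and scoring by raw keyword overlap are illustrative assumptions, not the Virtual Participant’s actual implementation.

```python
# Sketch of the trigger logic: pool keywords from every message in a thread,
# score each stored story by keyword overlap, and only post when the best
# match clears a threshold. Data structures and the threshold value are
# assumptions for illustration, not the original code.

from dataclasses import dataclass

@dataclass
class Story:
    text: str            # the archived contribution the VP can re-post
    keywords: set        # trigger terms associated with the story

THRESHOLD = 3  # minimum keyword overlap before the VP speaks (assumed value)

def thread_keywords(messages):
    # Keywords come from all messages in the thread, not just the latest one
    words = set()
    for msg in messages:
        words.update(w.lower().strip(".,?!") for w in msg.split())
    return words

def best_story(messages, stories):
    # Consider all possible stories; trigger only above the threshold
    words = thread_keywords(messages)
    scored = [(len(words & s.keywords), s) for s in stories]
    score, story = max(scored, key=lambda pair: pair[0])
    return story if score >= THRESHOLD else None
```

Scoring the accumulated keywords of the whole thread, rather than the latest message alone, mirrors the point above that keywords are stored from all messages in a thread.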

Example interaction

Choice highlights

• Questions we asked:
– Should continue to be used: 90% agreed
– Name: 79% agreed
– Put me off: 11% agreed
– Reduce discussion: 9% agreed
– Relevant: 95% agreed
– Direct to participants: 15% agreed

The Active Archive

• Embedded in a conferencing system
– Tracks threads of discussion
– Posts relevant ‘stories’

• Four dimensions
– Anthropomorphism versus mechanomorphism
– Private versus public
– Closed versus open
– Fixed versus extensible

• Social role: a bard

The importance of story-telling

• Story-telling systems
– Conversational
– Memorable
– Easy to integrate with existing knowledge
– Easy to integrate with conferencing

• Increasing engagement and motivation

KMi Planet

Open Book

Fundamental tensions of agency: choice, control, and explanation

• Agents need to be infallible
– Agent works — success because of the user
– Agent doesn’t work — failure due to the agent

• People need to feel in control
– Trust, responsibility, power, authority, accountability, privacy, respect, tolerance...

• How can we evaluate/assess/theorise about these?
– Theories of social cognition

‘Agent’ is a fuzzy category

[Bar chart, 0–80%: how often each attribute figures in published definitions of ‘agent’ versus in survey responses. Attributes include autonomy, reactivity, goal-orientedness, communicativeness, situatedness, delegation, rationality, trustworthiness, flexibility, adaptivity, intelligence, mobility, persona, real world, persistence, internal states, identity, acquiescence, visibility, helpfulness, complexity, seriality, formal language, monitorability, and distributedness.]
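One way to read the chart: ‘agent’ is a graded category, and a system’s agent-ness is the weighted share of definitional attributes it exhibits. A sketch of that reading follows, with made-up weights; the chart’s actual frequencies are not reproduced here.

```python
# Agent-ness as a graded score: each attribute a system exhibits contributes
# in proportion to how often that attribute appears in definitions of 'agent'.
# The weights below are invented for illustration, not the survey's figures.

WEIGHTS = {
    "autonomy": 0.8, "reactivity": 0.7, "goal-orientedness": 0.6,
    "communicativeness": 0.5, "persistence": 0.3, "mobility": 0.2,
}

def agentness(attributes):
    # Degree of membership: exhibited weight over total possible weight
    have = sum(w for a, w in WEIGHTS.items() if a in attributes)
    return have / sum(WEIGHTS.values())

# Windows exhibits some attributes but not others, so it lands mid-scale:
print(round(agentness({"autonomy", "reactivity", "persistence"}), 2))  # 0.58
```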

Different stories, same system

[Diagram: the internal view and the external view of the same system]

Social roles: agents as assistants

• Agents are often thought of as assistants
– (e.g., Maxims; Maes, 1994)
– Delegation is central
– Agents are ‘autonomous’

Social roles and the theatrical metaphor

• Derived from Goffman, Parsons, and Laurel
• Some typical social roles for agents
– As assistants (e.g., Abecker et al., 1998)
– As matchmakers (e.g., Foner & Crabtree, 1996)
– As librarians (e.g., Watt, 1998)
– As reporters (e.g., Domingue & Scott, 1998)
– As editors (e.g., Domingue & Scott, 1998)
– As critics (e.g., Fischer et al., 1990)
– As oracles (e.g., Ackerman, 1994)
– As bards (e.g., Masterton, 1997, 1998)
– As village gossips (e.g., Krulwich & Burkey, 1996, 1997)

The moral: finding the right balance

• Not all agents are assistants
– Social rules for assistants may not apply

• Agents have an awkward line to tread:
– Ignorance and informedness
– Responsibility and autonomy
– Privacy and publicity
– Trust and fear
– Power and obedience

• “Now, bearing all that in mind, should I use it?”

• Don’t expect cognitive theories to tell you everything!

Future opportunities

• Knowledge management
– Open Book, the Virtual Participant, and beyond

• Automated assessment
– Use of VP technologies, giving students additional feedback in formative assessment

• Engaging materials
– Games use agents; engagement increases motivation, and motivation increases retention

• Out-of-hours support
– Agents can provide support outside working hours

Summary

• People treat programs psychologically rather than physically

• Agents and objects
– Don’t expect people to manipulate agents
– Don’t expect cognitive theories to be enough
– Socialise computers, don’t mechanise people

• Agents will work, if we:
– Respect the complexity of human social life
– Respect people