Transcript of MSc Presentation
Personality in Argumentative Agents
Marlon Etheredge
7 November 2016
Utrecht University
Table of Contents
Introduction
Argumentation
Personality
Personality Model
Reasoning
Software Implementation
Opponent Modelling
Conclusion
1
Introduction
Research Topic
• Software agents in argumentation dialogues
• Agents try to settle their differences, which has many
applications including (explaining) decision making [Zho+14],
training humans [Van11] and resolving conflicts in firewalls
[App+12]
• Argumentation can be in the form of deliberation, negotiation
and persuasion among others
• An increasing number of applications requires adjusting the
agent to the application context
• Introducing personality in argumentative agents
2
Argumentation Dialogues
• This research focuses on persuasion dialogues
• Agents need to settle on a conflicting point of view, called the
topic
• As an example, two agents, Peter and Otto, can settle their
differences regarding the topic: Science endangers humanity
• Peter, the proponent of the dialogue, starts by claiming the
topic of the dialogue
3
Persuasion
Peter: Science endangers humanity. (Making a claim)
Otto: Why do you think that science endangers humanity?
(Asking for support for the claim)
Peter: Since science brings about many new technologies
that could potentially harm human-beings.
(Providing support for the claim)
Otto: I agree with you that science brings about new
technologies. (Conceding the provided support for
the claim) But I disagree that this poses a threat to
humanity, since science primarily introduces new
technologies that improve the lives of human-beings.
(Providing a counter argument)
4
Peter: Why do you think that these technologies improve
the lives of human-beings? (Asking for support for
the counter argument)
Otto: Since these technologies provide for a method of
helping humans in situations where they would have
been helpless otherwise. In addition, improving the
lives of human-beings does not endanger humanity.
(Providing support for the counter argument)
Peter: OK, I agree that science introduces new technologies
that improve the lives of human-beings. Moreover, I
agree that improving the lives of human-beings does
not endanger humanity. (Conceding a claim)
5
Formalization
• Using Prakken’s dialogue framework [Pra05]
• Using ASPIC+ [Pra10] as a logic for defeasible argumentation
• Different attitudes of argumentative agents are built on top of
Parsons et al. [PWA03]
6
Research Direction
• Previously, reasoning of argumentative agents was primarily
based on game-theoretic approaches
• Personality in the reasoning process of argumentative agents
has not been studied frequently
• This research also studies modelling the personality of the
opponents of these argumentative agents
7
Research Topic
• Personality in agents increases the adjustability of the agent’s
reasoning process, according to the context of the application
• By modelling the personality of the opponent, the agent can
include knowledge of the opponent’s personality and thus its
behavior in its reasoning process
• It is expected that personality in agents contributes to the
compatibility between human and artificial-intelligence tasks
8
Research Topic
• Based on the configuration of the personality, the agent can
behave differently
• In addition, the agent can prefer or disprefer using certain
utterances or only use them under certain conditions
Quick decisions
The agent could be configured to easily accept claims by its
opponent, reaching decisions quicker
Absolute truth
The agent could be configured to only accept arguments when the
opponent uses irrefutable argumentation in scenarios where the
correctness of the outcome of a dialogue is critical
9
Research questions
1. How can personality be introduced to argumentative agents
for persuasion dialogues?
2. How can a model for personality in argumentative agents be
devised that allows argumentative agents to reason according
to a personality configuration?
3. How can an argumentative agent featuring personality be
implemented?
4. How can an argumentative agent featuring personality model
the personality of its opponent?
10
Approach
• Theoretical model of personality in argumentative agents
• Implementation of an argumentative agent featuring
personality based on Erik Kok’s BAIDD framework [Kok13]
• Method for modelling the personality of the opponent
11
Argumentation
Dung’s Abstract Framework
(Figure: an argument graph of four arguments connected by attack
arrows: “Science endangers humanity”, “Science does not endanger
humanity”, “New technologies harm humans” and “New technologies
improve lives of humans; improving lives does not harm humans”)
• Dung describes a formalism of argumentation [Dun95]
allowing for the description of arguments and attack:
• A set A of arguments
• A binary relation on arguments Def , called attack
• Allowing for different notions of acceptability describing the
status of arguments
• Used to draw conclusions based on AF = (A,Def )
12
Dung’s Abstract Framework
(Figure: an abstract framework with arguments A, B, C, D and
attacks between them)
• Dung describes a formalism of argumentation [Dun95]
allowing for the description of arguments and attack:
• A set A of arguments
• A binary relation on arguments Def , called attack
• Allowing for different notions of acceptability describing the
status of arguments
• Used to draw conclusions based on AF = (A,Def )
12
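As an illustration of drawing conclusions from AF = (A, Def ), the grounded extension can be computed by iterating Dung's characteristic function. This is a minimal sketch, not code from the thesis; the function name and the pair-based representation of Def are assumptions:

```python
def grounded_extension(arguments, attacks):
    """Compute the grounded extension of AF = (A, Def).

    arguments: a set of argument labels
    attacks:   a set of (attacker, target) pairs (the Def relation)
    """
    def defended(a, ext):
        # a is acceptable w.r.t. ext if every attacker of a is
        # itself attacked by some argument in ext
        attackers = {x for (x, y) in attacks if y == a}
        return all(any((z, x) in attacks for z in ext) for x in attackers)

    ext = set()
    while True:
        nxt = {a for a in arguments if defended(a, ext)}
        if nxt == ext:          # least fixed point reached
            return ext
        ext = nxt

# C attacks B, B attacks A: C is unattacked, so C is accepted,
# and C defends A; the grounded extension is {A, C}
result = grounded_extension({"A", "B", "C"}, {("B", "A"), ("C", "B")})
```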
ASPIC+
• Argumentation system, instantiation of Dung’s system
• Combines strict (deductive) and defeasible (presumptive)
reasoning
• Contrariness function representing contrary relationships like
’harms humans’ and ’saves lives’
• Specification of preference orderings over defeasible inference
rules, arguments and a knowledge base
13
ASPIC+
• Arguments are constructed based on a knowledge base in an
argumentation system (K,≤′)• Strict rules of the form ϕ1, . . . , ϕn → ϕ
• Defeasible rules of the form ϕ1, . . . , ϕn ⇒ ϕ
• E.g. A = ϕ, A is an argument using no rules of inference,
• for rs = ϕ→ ψ, B = ϕ,ϕ ⊃ ψ → ψ, B is an argument using
a strict inference rule rs from which it without exception
follows that ψ,
• for rd = ϕ⇒ γ, C = ϕ,ϕ ⊃ γ ⇒ γ, C is an argument using a
defeasible inference rule rd from which it presumably follows
that γ
14
ASPIC+
• Three types of attack:
• An argument A undercuts an argument B if A attacks an
inference rule of B
• An argument A rebuts an argument B if A attacks the
conclusion of B
• An argument A undermines an argument B if A attacks a
premise of B
• Status of arguments:
• An argument is justified if it can make the opponent run out of
replies
• An argument is overruled if it is not justified and defeated by a
justified argument
• An argument is defensible if it is not justified but none of its
defeaters is justified
15
Prakken’s Abstract Framework
• Has a dialogue goal and at least two participants
• Communication language Lc , defining the available speech
acts
• Protocol Pr , governing the allowed speech acts throughout the
dialogue
• Commitments, propositions the participant is expected to
defend publicly
• Effect rules C of speech acts in Lc , describing the effects of
speech acts on the commitments of the participants
• Outcome, turntaking- and termination rules
16
Liberal Dialogue Systems
• Framework specialized for persuasion dialogues
• Specifies a set of protocol rules for persuasion
• Defines a set of speech acts and corresponding effects rules
for persuasion
• Specifies corresponding turntaking-, outcome- and
termination rules
17
Speech Acts for Persuasion
Acts        Attacks                      Surrenders
claim ϕ     why ϕ                        concede ϕ
why ϕ       argue A                      retract ϕ
argue A     why ϕ (ϕ ∈ prem(A)),         concede ϕ (ϕ ∈ prem(A)
            argue B (B defeats A)          or ϕ = conc(A))
concede ϕ
retract ϕ
18
Effect Rules for Persuasion
• A participant that moves claim ϕ or concede ϕ commits himself
to ϕ
• A participant that moves retract ϕ uncommits himself from ϕ
• A participant that moves argue A commits himself to the
premises of A and the conclusion of A
19
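These effect rules can be read as a simple update on a participant's commitment store. A hypothetical sketch (the function and argument names are illustrative, not BAIPD's API):

```python
def apply_effect(commitments, act, content):
    """Update a commitment store according to the effect rules.

    act:     'claim', 'concede', 'retract' or 'argue'
    content: a proposition, or (premises, conclusion) for 'argue'
    """
    cs = set(commitments)
    if act in ("claim", "concede"):
        cs.add(content)              # commits himself to the proposition
    elif act == "retract":
        cs.discard(content)          # uncommits himself from it
    elif act == "argue":
        premises, conclusion = content
        cs |= set(premises)          # commits to the premises of A
        cs.add(conclusion)           # ...and to the conclusion of A
    return cs

cs = apply_effect(set(), "claim", "endangersHumanity")
cs = apply_effect(cs, "argue", (["harmHumans"], "endangersHumanity"))
cs = apply_effect(cs, "retract", "endangersHumanity")
# cs is now {"harmHumans"}: the retraction removed the claim, but the
# commitment to the argument's premise remains
```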
Example Persuasion Dialogue
P1: claim endangersHumanity
O2: why endangersHumanity
P3: endangersHumanity since harmHumans
O4: why harmHumans
P5: harmHumans since newTechnologies
O6: concede newTechnologies
O7: claim ¬endangersHumanity
P8: why ¬endangersHumanity
O9: ¬endangersHumanity since helpingHumans
P10: why helpingHumans
O11: helpingHumans since newTechnologies
P12: concede helpingHumans
O13: ¬harmHumans since helpingHumans
O14: retract endangersHumanity
20
Personality
Personality Theories
• Different stances on how ’personality’ should be investigated,
including biological, evolutionary and behaviorist stances
• The trait approach subdivides the concept into measurable
patterns of human behavior
• No consensus on the number of traits
• Different taxonomies exist, varying in the number of
personality traits and in how they are described
• Five-factor model [JS99; MC08; Wig96], Eysenck Personality
Inventory [EE65], Myers-Briggs Type Indicator [MM10],
HEXACO [Ash+04]
21
FFM (OCEAN Model)
• Our agent’s personality model is based on the Five-factor
model (FFM)
• Describes the concept of personality in terms of five
personality traits (often referred to by the acronym OCEAN):
• Openness to Experience: Active seeking and appreciation of
experiences for their own sake
• Conscientiousness: Degree of organization, persistence, control
and motivation
• Extraversion: Quantity and intensity of energy directed
outwards into the social world
• Agreeableness: Kinds of interactions an individual prefers from
compassion to tough mindedness
• Neuroticism: Psychological distress
22
FFM (OCEAN Model)
Each personality trait is subdivided into six personality facets
O C E A N
Fantasy Competence Warmth Trust Anxiety
Aesthetics Order Gregariousness Straightforwardness Angry Hostility
Feelings Dutifulness Assertiveness Altruism Depression
Actions Achievement Striving Activity Compliance Self-Consciousness
Ideas Self-Discipline Excitement Seeking Modesty Impulsiveness
Values Deliberation Positive Emotions Tender-Mindedness Vulnerability
23
Personality Model
Personality Model
• Model describing the personality of the argumentative agent
• Divided into three components:
Personality Vector Associates for each personality facet
from the FFM a strength that indicates the
strength of the personality facet in the agent’s
personality
Attitude Acts as a condition that must be met before the
agent is allowed to act in an argumentative
dialogue
Reasoning System Defines, based on the personality of the
agent, the preference of the agent in terms of
speech acts and attitudes
24
Action Selection vs Revision
The personality model makes a distinction between two types of
personality facets and corresponding reasoning:
Action Selection
• Preference ordering over
speech act types
• E.g. an agent prefers
claiming over conceding
Action Revision
• Preference ordering over
attitudes
• E.g. an agent prefers to be
faithful over rigid when
conceding
25
Action Selection vs Revision
Action Selection
• Self-consciousness
• Assertiveness
• Actions
• Ideas
• Values
• Competence
Action Revision
• Achievement Striving
• Self-discipline
• Deliberation
• Activity
• Trust
• Straightforwardness
• Modesty
• Anxiety
• Angry Hostility
• Depression
26
Personality Theory
Some of the personality facets in the FFM are non-beneficial to the
personality model of the argumentative agent:
O C E A N
Fantasy Competence Warmth Trust Anxiety
Aesthetics Order Gregariousness Straightforwardness Angry Hostility
Feelings Dutifulness Assertiveness Altruism Depression
Actions Achievement Striving Activity Compliance Self-Consciousness
Ideas Self-Discipline Excitement Seeking Modesty Impulsiveness
Values Deliberation Positive Emotions Tender-Mindedness Vulnerability
27
Personality Theory
• Each personality trait and corresponding personality facet has
a description in the personality theory
• IPIP [Gol99] contains a comprehensive description of these
based on the NEO PI-R personality inventory [CM92]
• Facets included in our personality model are interpretations of
the descriptions in IPIP tailored to argumentative agents
28
Personality Facets
O C E A N
Actions Competence Assertiveness Trust Anxiety
Ideas Achievement Striving Activity Straightforwardness Angry Hostility
Values Self-Discipline Modesty Depression
Deliberation Self-Consciousness
Self-consciousness: ”Tendency to be shy or anxious”, not
preferring claim and argue, preferring to concede
Assertiveness: ”Social ascendancy and forcefulness of expression”,
preferring claim
Actions: ”Openness to new experiences on a practical level”,
preferring concede
Ideas: ”Intellectual curiosity”, preferring challenge
Values: ”Readiness to re-examine own values and those of
authority figures”, preferring retract
Competence: ”Belief in own self-efficacy”, disfavor retract and
accept, preferring argue
29
Personality Facets
O C E A N
Actions Competence Assertiveness Trust Anxiety
Ideas Achievement Striving Activity Straightforwardness Angry Hostility
Values Self-Discipline Modesty Depression
Deliberation Self-Consciousness
Achievement Striving: ”The need for personal achievement and
sense of direction”, preferring to achieve and defend its goal
Self-discipline: ”One’s capacity to begin tasks and follow through
to completion despite boredom or distractions”, preferring to add
to lines of dispute
Deliberation: ”Tendency to think things through before acting or
speaking”, preferring well-motivated moves
Activity: ”Pace of living”, preferring to make many moves in a
dialogue
Trust: ”Preference in believing others”, preferring attitudes that
help the agent concede
Straightforwardness: ”The tendency of a person to be direct and
frank in communication with others”, prefer not to be incoherent,
irrelevant or verbose
30
Personality Vector
• Let AS denote the set of action selection personality facets
and AR denote the set of action revision personality facets
• Let Strength(f ) ∈ R denote the strength of facet f , with
f ∈ AS or f ∈ AR
• An action selection personality vector PVAS is a vector
[Strength(f1), Strength(f2), . . . , Strength(fn)] such that
n = |AS| and f1, f2, . . . , fn ∈ AS
• An action revision personality vector PVAR is a vector
[Strength(f1), Strength(f2), . . . , Strength(fn)] such that
n = |AR| and f1, f2, . . . , fn ∈ AR
31
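A direct way to realize these definitions is a strength map over each facet set. A minimal sketch, not the thesis's implementation; the facet list follows slide 26 and the strength values are made up:

```python
# Action selection facets AS, in a fixed order (per slide 26)
AS_FACETS = ["self-consciousness", "assertiveness", "actions",
             "ideas", "values", "competence"]

def personality_vector(strengths, facets):
    """Build [Strength(f1), ..., Strength(fn)] for a facet set,
    defaulting unspecified facets to strength 0.0."""
    return [strengths.get(f, 0.0) for f in facets]

pv_as = personality_vector({"assertiveness": 0.9, "ideas": 0.4},
                           AS_FACETS)
# pv_as has one entry per action selection facet, in facet order:
# [0.0, 0.9, 0.0, 0.4, 0.0, 0.0]
```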
Attitudes
• Attitudes specify under what condition an agent is allowed to
make a move containing a certain speech act
• Attitudes are associated with speech act types present in the
framework
• If the condition is not met, the agent cannot make that
move
• In addition to a preference for certain speech act types, the
agent’s preference specifies preferences for certain attitudes
through action revision
• Attitudes were first introduced by Parsons et al. [PWA03] and
extended in this research
32
Attitudes
Assertion Attitudes
• Confident
• Careful
• Thoughtful
• Spurious
• Deceptive
• Hesitant
33
Attitudes
Acceptance Attitudes
• Credulous
• Cautious
• Skeptical
• Faithful
• Rigid
34
Attitudes
Challenge Attitudes
• Judicial
• Suspicious
• Persistent
• Tentative
• Indifferent
35
Attitudes
Retraction Attitudes
• Regretful
• Sensible
• Retentive
• Incongruous
• Determined
36
Attitudes
Argue Attitudes
• Hopeful
• Dubious
• Thorough
• Misleading
• Fallacious
• Devious
37
Summary
• Agent personality model as personality vector and attitudes
• Based on the FFM personality theory
• Distinction between action selection and action revision
• Model is extensible with more facets and attitudes, additional
or different descriptions and even different personality theories
38
Reasoning
Reasoning
• The agent’s reasoning process is taken care of by the agent’s
reasoning system consisting of two components:
Reasoning Rules Determine according to the strengths of
personality facets an output value used to
compute preference orderings
Reasoning Algorithm Uses these output values to generate
moves that are played by the agent in a dialogue
• The reasoning system makes use of a Mamdani Fuzzy
Inference System (FIS) [MA75] to compute preference values
based on reasoning rules and facets strengths
39
Mamdani Fuzzy Inference System
• Mamdani FIS computes output values based on a set of input
values, a membership distribution and a set of fuzzy rules
• Reasoning rules are implemented as fuzzy rules
• Using three fuzzy classes for input values: Low, Med and High
and two for output values: Favored and Disfavored
• Support for the operators and, or and not, implemented as
min(x , y), max(x , y) and 1 − x respectively
40
Reasoning Rules
if x is med and y is high then z is disfavored
if a is low and b is med then z is favored
41
Reasoning Rules
• Reasoning rules for action selection and action revision can
now be formalized, for example:
• if ideas is high then challenge is favored
• if deliberation is not high then thoughtful is disfavored
• Reasoning rules introduce a syntax for specifying the effects of
the strengths of facets in the agent’s personality on its
behavior
• Reasoning rules for action selection use strengths specified in
PVAS and output preference values for speech act types
• Reasoning rules for action revision use strengths specified in
PVAR and output preference values for attitudes
42
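A toy Mamdani-style evaluation of two such rules might look as follows. The triangular membership functions and the simplified rule activation (no full defuzzification step, unlike what Fuzzylite computes) are illustrative assumptions, not the thesis's exact setup:

```python
# Triangular membership functions for facet strengths in [0, 1]
def low(x):  return max(0.0, 1.0 - 2.0 * x)             # 1 at 0
def med(x):  return max(0.0, 1.0 - abs(2.0 * x - 1.0))  # peak at 0.5
def high(x): return max(0.0, 2.0 * x - 1.0)             # 1 at 1

def rule(*antecedents):
    """Fuzzy 'and' over antecedent membership degrees: min(x, y)."""
    return min(antecedents)

strengths = {"ideas": 0.8, "deliberation": 0.3}

# if ideas is high then challenge is favored
challenge_favored = rule(high(strengths["ideas"]))
# if deliberation is not high then thoughtful is disfavored
# (fuzzy 'not' is 1 - x)
thoughtful_disfavored = rule(1.0 - high(strengths["deliberation"]))
```

With these strengths, challenge is favored to degree about 0.6 and thoughtful is disfavored to degree 1.0; the degrees then feed the preference orderings.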
Reasoning Rules (Example)
Now, (complex) reasoning rules can be created that power the
reasoning system of the agent
43
Reasoning Rules (Example)
if actions is high
or selfconsciousness is high
then acceptance is favored
43
Reasoning Rules (Example)
if ideas is high
then challenge is favored
43
Reasoning Rules (Example)
if assertiveness is high
then assertion is favored
43
Reasoning Rules (Example)
if achievementstriving is high
and selfdiscipline is high
and straightforwardness is high
and modesty is low
and anxiety is low
and activity is high
and deliberation is high
then thoughtful is favored
43
Reasoning Rules (Example)
By combining all these reasoning rules, the agent’s behavior is
adjustable according to the definition of its personality vectors.
The implementation contains seven action selection reasoning rules
and 54 action revision reasoning rules.
43
Reasoning Algorithm
• Consists of an action selection algorithm:
1. Computes, given the agent’s personality, a preference ordering
over speech act types
2. Returns the preference ordering
44
Reasoning Algorithm
• And an action revision algorithm:
1. Takes a preference ordering over speech act types
2. Computes for each speech act type the preference ordering
over attitudes associated with that speech act type given the
agent’s personality
3. Tests whether the speech act type is allowed in a new move
4. If so, the new move is added to a set of moves that is
contributed to the dialogue
5. Returns the set of moves
44
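The two stages above can be sketched as follows; the act list, the stub predicates and the function names are illustrative assumptions, not the BAIPD implementation:

```python
def action_selection(preference_of):
    """Order the framework's speech act types by preference value,
    most preferred first."""
    acts = ["claim", "argue", "why", "concede", "retract"]  # assumed act set
    return sorted(acts, key=preference_of, reverse=True)

def action_revision(ordered_acts, attitude_allows, make_move):
    """Keep, in preference order, the candidate moves whose associated
    attitude condition permits them."""
    moves = []
    for act in ordered_acts:
        move = make_move(act)          # candidate move, or None
        if move is not None and attitude_allows(act, move):
            moves.append(move)
    return moves

prefs = {"claim": 0.9, "argue": 0.7, "why": 0.5,
         "concede": 0.2, "retract": 0.1}
ordered = action_selection(lambda a: prefs[a])
# with these preferences: claim, argue, why, concede, retract
```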
Summary
• Reasoning rules for the implementation of reasoning according
to description of personality
• A reasoning algorithm for the computation of preference
ordering based on the personality of the agent
• Reasoning system of the agent allowing for introduction of
personality in an argumentative agent
45
Software Implementation
BAIPD
• BAIPD, a modified version of Kok’s BAIDD testbed [Kok13],
which was built for experimentation with software agents in
deliberation dialogues
• BAIPD handles the persuasion dialogue process including
protocol rules, turn taking and requesting the agents to make
moves
• The platform contains an implementation of the
argumentative agent with personality making use of the
Fuzzylite library [Rad14] for an implementation of a FIS
46
Overview
47
Personality Vector
48
Action Selection
49
Action Revision
50
Dialog
51
Summary
• BAIPD, a modified version of the BAIDD testbed by Erik Kok,
adapted for persuasion (liberal dialogues)
• Implementation of the personality model
• Including reasoning rules and reasoning algorithm
52
Opponent Modelling
Opponent Modelling
Need for knowledge of the opponent
Suppose an agent prefers a faithful attitude and faces a spurious
or deceptive opponent. Even though the agent would typically
concede easily, it can incorporate knowledge of its opponent: if
the agent has a method of modelling the opponent, it can prefer
to concede less in this context.
53
Reasoning Scheme
• Reasoning scheme eliminating possible attitudes by the
opponent based on modus tollens
• Abduction on possible attitudes prunes the set of possible
attitudes
An agent with a confident attitude can assert any proposition
for which he can construct an argument.
The agent has asserted a proposition (claim), but cannot
construct an argument for the claim.
The confident attitude is not a possible attitude for the claim
move.
54
Pn: claim endangersHumanity
On+1: why endangersHumanity
Pn+2: endangersHumanity since harmHumans
• A : M≤∞ × {P,O} −→ ℘(AT ), with AT as the set of
attitudes
• A(d ,Pn) = {confident, careful, thoughtful}
55
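The pruning step can be sketched as filtering the attitude set with per-attitude consistency conditions; the condition stubs below are illustrative (only the confident condition from the example is modelled, the others are left unrefuted):

```python
def prune_attitudes(possible, move, conditions):
    """Eliminate attitudes whose condition is inconsistent with the
    observed move (modus tollens on the attitude's condition)."""
    return {t for t in possible if conditions[t](move)}

# The opponent claimed a proposition but provided no argument for it
move = {"act": "claim", "has_argument": False}
conditions = {
    # confident: may assert any proposition it can argue for
    "confident":  lambda m: m["has_argument"],
    # stubs: these conditions are not refuted by this observation
    "careful":    lambda m: True,
    "thoughtful": lambda m: True,
}
remaining = prune_attitudes({"confident", "careful", "thoughtful"},
                            move, conditions)
# remaining == {"careful", "thoughtful"}: confident is eliminated
```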
Summary
• Continuously pruning the attitude status based on new moves
added to the dialogue
• Model the attitude statuses of moves in the dialogue using
something like a histogram
• Providing the histogram as input to a learning algorithm for
optimization of the agent’s strategy according to its
personality and its opponent’s personality model
56
Conclusion
Conclusion
How a personality can be introduced to argumentative agents for
persuasion dialogues (research question 1) and how a model for
personality in argumentative agents can be devised that allows
argumentative agents to reason according to a personality
configuration (research question 2):
57
Conclusion
• Using the FFM as a basis for a description of personality
• Defining a personality model as (i) a description of the
personality of an argumentative agent in 15 personality facets
(ii) a personality vector describing the agent’s personality
configuration
57
Conclusion
How an argumentative agent featuring personality can be
implemented (research question 3):
57
Conclusion
• Adjustment of Erik Kok’s BAIDD testbed, introducing BAIPD
• Introducing the reasoning system (i) to describe reasoning
rules that determine the effects on preferences based on the
personality configuration (ii) describing a reasoning algorithm
for reasoning based on the agent’s personality
57
Conclusion
On how an argumentative agent featuring personality can model
the personality of its opponent (research question 4):
57
Conclusion
• Maintaining an opponent modelling based on new moves and
corresponding commitments
• Determination of possible attitudes for moves in the dialogue
• Abduction to eliminate possible attitudes
57
Future Research
• Optimizing strategies of agents
• Extension of this research outside the field of argumentation
theory
• Investigating hypotheses such as ”a dialogue between two
achievement-striving agents increases in length”
• Extension or adjustment of the personality descriptions in this
research
58
Thank you.
59