Programming for Social Scientists
Lecture 4
UCLA Political Science 209-1: Programming for Social Scientists
Winter 1999
Lars-Erik Cederman & Benedikt Stefansson
POL SCI 209-1 Cederman / Stefansson
Exercise 1b

int matrix[2][2] = {{3,0},{5,1}};

@implementation Player
...
-setRow: (int) r Col: (int) c {
  if (rowPlayer) {
    row = r;
    col = c;
  } else {
    row = c;
    col = r;
  }
  return self;
}

-(BOOL)move {
  return matrix[!row][col] > matrix[row][col];
}
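The -move test above can be sketched in plain C (an illustrative sketch, not from the slides; the name wants_to_move is my own). With the row player's payoffs in matrix[row][col] (row 0 = cooperate, row 1 = defect), a player "wants to move" exactly when flipping its own action strictly raises its payoff against the opponent's current action:

```c
/* Illustrative sketch: the -move test in plain C.
   matrix[row][col] holds the row player's payoff;
   row 0 = cooperate, row 1 = defect. */
int matrix[2][2] = {{3, 0}, {5, 1}};

/* 1 if the row player prefers to flip its action, 0 otherwise. */
int wants_to_move(int row, int col) {
    return matrix[!row][col] > matrix[row][col];
}
```

For this matrix the row player always wants to leave row 0 (cooperate) and never wants to leave row 1 (defect), so defection is the only stable choice, which is what the exercise's printed grid marks.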
Exercise 1c

int matrix[2][2][2] = {{{3,0},{5,1}},
                       {{3,1},{5,0}}};

@implementation Player

-init: (int)n rowPlayer: (BOOL)rp playerType: (int)pt {
  name = n;
  rowPlayer = rp;
  playerType = pt;
  return self;
}
...
-(BOOL)move {
  return matrix[playerType][!row][col] > matrix[playerType][row][col];
}
Exercise 1c (cont'd)

player1 = [Player create: globalZone];
player2 = [Player create: globalZone];
for (pt=0; pt<2; pt++) {
  [player1 init: 1 rowPlayer: YES playerType: pt];
  [player2 init: 2 rowPlayer: NO playerType: pt];
  for (r=0; r<2; r++) {
    printf("+---+---+\n");
    printf("|");
    for (c=0; c<2; c++) {
      [player1 setRow: r Col: c];
      [player2 setRow: r Col: c];
      if ([player1 move] || [player2 move])
        printf("   |");
      else
        printf(" * |");
    }
    printf("\n");
  }
}
printf("+---+---+\n");
Exercise 2: Player.m
@implementation Player
-init: (int) n {
  name = n;
  alive = YES;
  return self;
}

-setOther: o {
  other = o;
  return self;
}

-(BOOL)isAlive {
  return alive;
}

-play: r {
  int shot;

  [r load];
  shot = [r trigger];
  if (shot)
    alive = NO;
  else
    [other play: r];
  return self;
}
@end
Exercise 2: Revolver.m...
#import <stdlib.h>

@implementation Revolver

-empty {
  bullets = 0;
  return self;
}

-load {
  bullets++;
  return self;
}

-(BOOL)trigger {
  return (double)rand() / (double)RAND_MAX < bullets / 6.0;
}
@end
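The -trigger method draws a uniform number and fires with probability bullets/6. The same logic can be checked in plain C with a Monte Carlo estimate (an illustrative sketch, not course code; trigger and fire_rate are my own names):

```c
/* Illustrative sketch: the -trigger logic in plain C. With k bullets
   in a 6-chamber revolver, a spin-and-fire goes off with probability
   k/6; fire_rate estimates this frequency by repeated trials. */
#include <stdlib.h>

#define CHAMBERS 6.0

/* 1 if the revolver fires, 0 otherwise. */
int trigger(int bullets) {
    return (double)rand() / (double)RAND_MAX < bullets / CHAMBERS;
}

/* Empirical firing frequency over n spins. */
double fire_rate(int bullets, int n) {
    int i, fired = 0;
    for (i = 0; i < n; i++)
        fired += trigger(bullets);
    return (double)fired / n;
}
```

With one bullet the estimate should hover near 1/6; with zero or six bullets it is (almost surely) 0 or 1.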
Prisoner's Dilemma Game

                     Player 2
                     C        D
Player 1     C      3,3      0,5
             D      5,0      1,1
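What makes this table a dilemma is that Defect strictly dominates Cooperate for each player, yet mutual defection leaves both worse off than mutual cooperation. A small C check (an illustrative sketch, not from the slides; pd and dominates are my own names; index 0 = C, 1 = D, pd[own][other] is the player's own payoff):

```c
/* Illustrative sketch: verify the dilemma structure of the PD table.
   pd[own][other] is the player's own payoff; 0 = C, 1 = D. */
int pd[2][2] = {{3, 0}, {5, 1}};

/* 1 if action a strictly dominates action b for this player. */
int dominates(int a, int b) {
    int other;
    for (other = 0; other < 2; other++)
        if (pd[a][other] <= pd[b][other])
            return 0;
    return 1;
}
```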
Iterated Prisoner's Dilemma
• repetitions of single-shot PD
• "Folk Theorem" shows that mutual cooperation is sustainable
• In The Evolution of Cooperation, Robert Axelrod (1984) created a computer tournament of IPD
  – cooperation sometimes emerges
  – Tit For Tat a particularly effective strategy
One-Step Memory Strategies

[Diagram: transitions between C and D from t-1 to t, governed by probabilities p and q conditioned on the opponent's remembered move]

Strategy = (i, p, q)
  i = prob. of cooperating at t = 0
  p = prob. of cooperating if opponent cooperated
  q = prob. of cooperating if opponent defected
The Four Strategies (cf. Cohen et al. p. 8)

Name    i  p  q
all-C   1  1  1
TFT     1  1  0
aTFT    0  0  1
all-D   0  0  0
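Since these four strategies are the deterministic corners of the (i, p, q) space, their pairings can be played out in C (an illustrative sketch, not course code; Strategy and play are my own names). The convention follows the slides: action 1 = cooperate, action 0 = defect, and the payoff matrix is the matrix[own][other] = {{1,5},{0,3}} used later in ModelSwarm:

```c
/* Illustrative sketch: play two deterministic one-step-memory
   strategies against each other and sum the payoffs.
   Action 1 = cooperate, 0 = defect; payoff[own][other]. */
int payoff[2][2] = {{1, 5}, {0, 3}};

typedef struct { int i, p, q; } Strategy;

/* Cumulative payoff of s1 against s2 over the given rounds. */
int play(Strategy s1, Strategy s2, int rounds) {
    int t, sum = 0;
    int a1 = s1.i, a2 = s2.i;          /* initial moves */
    for (t = 0; t < rounds; t++) {
        sum += payoff[a1][a2];
        int next1 = a2 ? s1.p : s1.q;  /* react to other's last move */
        int next2 = a1 ? s2.p : s2.q;
        a1 = next1;
        a2 = next2;
    }
    return sum;
}
```

Over four rounds, all-D against TFT reproduces the 8-versus-3 outcome shown on the "all-D meets TFT" slide, and aTFT against TFT reproduces the sum of 9 from the table of all 4 x 4 combinations.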
A four-iteration PD

[Diagram: over rounds t = 0, 1, 2, 3, 4, Row and Column each open with their initial move i, then draw moves from {C,D} according to p and q; each round yields a utility U, and the per-round utilities are summed, U + U + U + U = S, for each player]
all-D meets TFT

                               t=0  t=1  t=2  t=3   Cumulated payoff
Row (all-D; i=0, p=q=0)         D    D    D    D    5 + 1 + 1 + 1 = 8
Column (TFT; i=1, p=1, q=0)     C    D    D    D    0 + 1 + 1 + 1 = 3
Moves and Total Payoffs for all 4 x 4 Strategy Combinations

                            Other's Strategy
Own          all-C         TFT           aTFT          all-D
Strategy  pay/move sum  pay/move sum  pay/move sum  pay/move sum
all-C       3333    12    3333    12    0000     0    0000     0
TFT         3333    12    3333    12    0153     9    0111     3
aTFT        5555    20    5103     9    1313     8    1000     1
all-D       5555    20    5111     8    1555    16    1111     4

Source: Cohen et al. Table 3, p. 10
simpleIPD: File structure
main.m ModelSwarm.m Player.m
ModelSwarm.h Player.h
simpleIPD: main.m

int main(int argc, const char ** argv) {
  id modelSwarm;

  initSwarm(argc, argv);
  modelSwarm = [ModelSwarm create: globalZone];
  [modelSwarm buildObjects];
  [modelSwarm buildActions];
  [modelSwarm activateIn: nil];
  [[modelSwarm getActivity] run];
  return 0;
}
The ModelSwarm

• An instance of the Swarm class can manage a model world
• Facilitates the creation of the agents and of the interaction model
• A model can have many Swarms, often nested

[Diagram: main creates the ModelSwarm, which in turn contains Player1 and Player2]
simpleIPD: ModelSwarm.h...
@interface ModelSwarm: Swarm {
  id player1, player2;
  int numIter;
  id stopSchedule, modelSchedule, playerActions;
}
+createBegin: (id) aZone;
-createEnd;
-updateMemories;
-distrPayoffs;
-buildObjects;
-buildActions;
-activateIn: (id) swarmContext;
-stopRunning;
@end
Creating a Swarm
I. createBegin, createEnd – Initialize memory and parameters
II. buildObjects– Build all the agents and objects in the model
III. buildActions– Define order and timing of events
IV. activate– Merge into top level swarm or start Swarm running
Step I: Initializing the ModelSwarm

int matrix[2][2] = {{1,5},{0,3}};

@implementation ModelSwarm

+createBegin: (id) aZone {
  ModelSwarm * obj;

  obj = [super createBegin: aZone];
  return obj;
}

-createEnd {
  return [super createEnd];
}
Details on the createBegin method

• The "+" indicates that this is a class method, as opposed to "-", which indicates an instance method
• ModelSwarm * obj – tells the compiler that obj is statically typed to the ModelSwarm class
• [super ...] – executes the createBegin method in the superclass of obj (Swarm) and returns an instance of ModelSwarm
Memory zones

• The Defobj superclass provides facilities to create and drop objects
• In either case the object is created "in a memory zone"
• Effectively this means that the underlying mechanism provides enough memory for the instance, its variables, and its methods
• The zone also keeps track of all objects created in it and allows you to reclaim memory simply by dropping the zone, which signals all objects in it to destroy themselves
Where did that zone come from?

In main.m: initSwarm(argc, argv);
  – executes various functions in defobj and simtools, which create a global memory zone, among other things

In main.m: modelSwarm = [ModelSwarm create: globalZone];
  – the create: method is implemented in defobj, the superclass of the Swarm class, and it calls the createBegin: method in ModelSwarm.m
Step II: Building the agents

-buildObjects {
  player1 = [Player createBegin: self];
  [player1 initPlayer];
  player1 = [player1 createEnd];

  player2 = [Player createBegin: self];
  [player2 initPlayer];
  player2 = [player2 createEnd];

  [player1 setOtherPlayer: player2];
  [player2 setOtherPlayer: player1];
  return self;
}
Details on the buildObjects phase
• The purpose of this method is to create each object needed at the start of the simulation, and then to pass parameters to those objects
• It is good OOP protocol to provide setX: methods for each parameter X we want to set, as in: [player1 setOtherPlayer: player2]
Why createBegin vs. create?

• Using createBegin:/createEnd is appropriate when we want a reminder that the object still needs to initialize, calculate, or set something (usually this code is put in the createEnd method)
• Always pair createBegin with createEnd to avoid messy problems
• But create: is perfectly fine if we just want to create an object without further ado
simpleIPD: ModelSwarm.m (cont'd)

-updateMemories {
  [player1 remember];
  [player2 remember];
  return self;
}

-distrPayoffs {
  int action1, action2;

  action1 = [player1 getNewAction];
  action2 = [player2 getNewAction];
  [player1 setPayoff: [player1 getPayoff] + matrix[action1][action2]];
  [player2 setPayoff: [player2 getPayoff] + matrix[action2][action1]];
  return self;
}
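Note the indexing convention in distrPayoffs: matrix[own action][other's action], with action 1 = cooperate and 0 = defect. A quick C check (an illustrative sketch, not course code; own_payoff is my own name) confirms this reproduces the PD payoffs C,C = 3; C,D = 0; D,C = 5; D,D = 1:

```c
/* Illustrative sketch: the ModelSwarm payoff matrix is indexed as
   matrix[own action][other's action], with 1 = cooperate, 0 = defect. */
int matrix[2][2] = {{1, 5}, {0, 3}};

/* Own payoff for a given (own, other) action pair. */
int own_payoff(int own, int other) {
    return matrix[own][other];
}
```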
simpleIPD: Player.h

@interface Player: SwarmObject {
  int time, numIter;
  int i, p, q;
  int cumulPayoff;
  int memory;
  int newAction;
  id other;
}
-initPlayer;
-createEnd;
-setOtherPlayer: player;
-setPayoff: (int) p;
-(int)getPayoff;
-(int)getNewAction;
-remember;
-step;
@end
simpleIPD: Player.m

@implementation Player

-initPlayer {
  time = 0;
  cumulPayoff = 0;
  i = 1; // TFT
  p = 1;
  q = 0;
  newAction = i;
  return self;
}

-createEnd {
  [super createEnd];
  return self;
}

-setOtherPlayer: player {
  other = player;
  return self;
}

-setPayoff: (int) payoff {
  cumulPayoff = payoff;
  return self;
}

-(int) getPayoff {
  return cumulPayoff;
}

-(int) getNewAction {
  return newAction;
}

-remember {
  memory = [other getNewAction];
  return self;
}

-step {
  if (time == 0)
    newAction = i;
  else {
    if (memory == 1)
      newAction = p;
    else
      newAction = q;
  }
  time++;
  return self;
}
Step III: Building schedules

-buildActions {
  stopSchedule = [Schedule create: self];
  [stopSchedule at: 12 createActionTo: self message: M(stopRunning)];

  modelSchedule = [Schedule createBegin: self];
  [modelSchedule setRepeatInterval: 3];
  modelSchedule = [modelSchedule createEnd];

  playerActions = [ActionGroup createBegin: self];
  playerActions = [playerActions createEnd];
  [playerActions createActionTo: player1 message: M(step)];
  [playerActions createActionTo: player2 message: M(step)];

  [modelSchedule at: 0 createActionTo: self message: M(updateMemories)];
  [modelSchedule at: 1 createAction: playerActions];
  [modelSchedule at: 2 createActionTo: self message: M(distrPayoffs)];
  return self;
}
Schedules

• Schedules define events in terms of:
  – Time of first invocation
  – Target object
  – Method to call

[schedule at: t createActionTo: agent message: M(method)]

[Diagram: a timeline t, t+1, t+2 with [m update] and [m distribute] placed at their scheduled ticks]
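The control flow that -buildActions sets up can be mimicked with an ordinary C loop (an illustrative sketch of the schedule semantics only, not Swarm API; all names here are my own). A repeating 3-tick schedule runs update at offset 0, the player steps at offset 1, and distribute at offset 2, until the stop schedule fires at absolute time 12:

```c
/* Illustrative sketch: a plain-C analogue of the repeating
   modelSchedule plus the one-shot stopSchedule. */
#include <stdio.h>

static int updates, steps, distributes;

static void update(void)       { updates++; }
static void step_players(void) { steps++; }
static void distribute(void)   { distributes++; }

/* Advance the clock one tick at a time until the stop time. */
void run(int stop_at) {
    int t;
    for (t = 0; t < stop_at; t++) {
        switch (t % 3) {            /* repeat interval of 3 */
        case 0: update(); break;
        case 1: step_players(); break;
        case 2: distribute(); break;
        }
    }
}
```

Running until t = 12 gives four full update/step/distribute cycles, matching the four iterations of the PD played in simpleIPD.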
ActionGroups

• Group events at the same timestep
• Define events in terms of:
  – Target object
  – Method to call

[actionGroup createActionTo: agent message: M(method)]

[Diagram: a timeline t=1, t=2, t=3 with [m update] at t=1, the grouped [p1 step] and [p2 step] at t=2, and [m distribute] at t=3]
Implementation

schedule = [Schedule createBegin: [self getZone]];
[schedule setRepeatInterval: 3];
schedule = [schedule createEnd];
[schedule at: 1 createActionTo: m message: M(update)];
[schedule at: 3 createActionTo: m message: M(distribute)];

actionGroup = [ActionGroup createBegin: [self getZone]];
actionGroup = [actionGroup createEnd];
[actionGroup createActionTo: p1 message: M(step)];
[actionGroup createActionTo: p2 message: M(step)];
[schedule at: 2 createAction: actionGroup];

[Timeline: with a repeat interval of 3, the update/step/distribute cycle runs at t+1, t+2, t+3, then again from t+4, ...]
Step IV: Activating the Swarm

-activateIn: (id) swarmContext {
  [super activateIn: swarmContext];
  [modelSchedule activateIn: self];
  [stopSchedule activateIn: self];
  return [self getActivity];
}

-stopRunning {
  printf("Payoffs: %d,%d\n", [player1 getPayoff], [player2 getPayoff]);
  [[self getActivity] terminate];
  return self;
}
Activation of schedule(s)

In main.m: [modelSwarm activateIn: nil];
  – there is only one Swarm, so we activate it in nil

In ModelSwarm.m: -activateIn: (id) swarmContext
  [modelSchedule activateIn: self]
  – this one line could set in motion a complex scheme of merging and activation