Game theory and security
Jean-Pierre Hubaux, EPFL
With contributions (notably) from M. Felegyhazi, J. Freudiger, H. Manshaei, D. Parkes, and M. Raya
Security Games in Computer Networks
• Security of Physical and MAC Layers
• Mobile Networks Security
• Anonymity and Privacy
• Intrusion Detection Systems
• Sensor Networks Security
• Security Mechanisms
• Game Theory and Cryptography
• Distributed Systems
– More information: http://lca.epfl.ch/gamesec
Security of physical and MAC layers (1/2)
1. Zhu Han, N. Marina, M. Debbah, A. Hjorungnes, “Physical layer security game: How to date a girl with her boyfriend on the same table,” in GameNets 2009.
2. E. Altman, K. Avrachenkov, A. Garnaev, "Jamming in wireless networks: The case of several jammers," GameNets 2009.
3. W. Trappe, A. Garnaev, "An eavesdropping game with SINR as an object function," SecureComm 2009.
[Figure: Alice communicates with Bob while an eavesdropper listens; Bob pays jamming friends J1, …, JN to interfere with the eavesdropper]

Players: Bob and his jamming friends
Objective: Prevent eavesdropping on the Alice–Bob communication
Cost: Payment to the jammers, plus interference caused to Bob and Alice
Game model: Stackelberg game
Game results: Existence of an equilibrium; design of a protocol that avoids eavesdropping
Security of physical and MAC layers (2/2)
Y.E. Sagduyu, R. Berry, A. Ephremides, "MAC games for distributed wireless network security with incomplete information of selfish and malicious user types," GameNets 2009.
[Figure: a network of well-behaved (W), selfish (S), and malicious (M) nodes]

Players (ad hoc or infrastructure mode):
1. Well-behaved (W) wireless nodes
2. Selfish (S) nodes – seek a higher access probability
3. Malicious (M) nodes – jam other nodes (DoS)
Objective: Find the optimal strategy against M and S nodes
Reward and cost: Throughput and energy
Game model: A power-controlled MAC game solved for a Bayesian Nash equilibrium
Game results: A Bayesian learning mechanism updates the type belief in repeated games
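The Bayesian belief update in the repeated game can be sketched as follows. The prior and the per-type collision likelihoods are illustrative assumptions, not values from the Sagduyu et al. paper:

```python
# Hedged sketch: Bayesian update of the belief that a neighbor is
# well-behaved (W), selfish (S), or malicious (M), based on observed
# channel outcomes. Priors and likelihoods are illustrative.

def bayes_update(belief, likelihood, observation):
    """belief: type -> prior; likelihood: type -> P(observation | type)."""
    posterior = {t: belief[t] * likelihood[t][observation] for t in belief}
    z = sum(posterior.values())
    return {t: p / z for t, p in posterior.items()}

belief = {"W": 0.6, "S": 0.3, "M": 0.1}        # assumed prior over node types
likelihood = {                                  # assumed P(outcome | type)
    "W": {"collision": 0.1, "idle": 0.9},
    "S": {"collision": 0.4, "idle": 0.6},
    "M": {"collision": 0.9, "idle": 0.1},
}

for obs in ["collision", "collision", "idle"]:  # observed channel outcomes
    belief = bayes_update(belief, likelihood, obs)

print(belief)  # belief shifts toward S and M after repeated collisions
```

Each stage of the repeated game observes one outcome and reweights the type distribution, which is exactly the role of the learning mechanism mentioned in the game results.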
Optimal defense mechanisms against denial of service attacks in wireless networks
Sensor networks security
A. Agah, M. Asadi, S. K. Das, “Prevention of DoS Attack in Sensor Networks using Repeated Game Theory,” ICWN 2006.
[Figure: payoff matrix of the game between the IDS (at the base station) and a sensor node]

                           IDS: Catch             IDS: Miss
Normal (forwards pkts)     False positive         Least damage
Malicious (drops pkts)     Best choice for IDS    False negative
Players:
1. IDS (residing at the base station)
2. Sensor nodes (some nodes may act maliciously and drop packets)
Game: A repeated game between the IDS and the nodes to detect the malicious nodes
Game results: Equilibrium computation in the infinitely repeated game; the results are used to evaluate the reputation of nodes
Game Theory and Security Mechanism: dealing with uncertainty
1. K. C. Nguyen, T. Alpcan, and T. Basar, “Security games with incomplete information,” in ICC 2009.
2. A. Miura-Ko, B. Yolken, N. Bambos, and J. Mitchell, "Security Investment Games of Interdependent Organizations," Allerton Conference on Communication, Control, and Computing, Allerton, IL, September 2008.
Players: Attacker and Defender
Objective: Find the optimal strategy given the strategy of opponent
Strategies: “Attack” or “Not to attack”, “Defend” or “Not to defend”
Decision Process: Fictitious Play (FP) game
Game model: Discrete-time repeated nonzero-sum matrix game
But players observe their opponent’s actions with error
Study of the effect of observation errors on the convergence to the NE with classical and stochastic FP games
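The fictitious-play decision process above can be sketched as follows. The 2x2 payoff numbers are illustrative assumptions, not values from the cited papers:

```python
# Hedged sketch of discrete-time fictitious play (FP) in a 2x2
# attacker-defender game: each player best-responds to the empirical
# frequency of its opponent's past actions. Payoffs are illustrative.

import numpy as np

# Rows: attacker plays Attack / Not attack; columns: defender plays
# Defend / Not defend. A = attacker payoffs, D = defender payoffs.
A = np.array([[-1.0, 2.0], [0.0, 0.0]])
D = np.array([[1.0, -2.0], [0.0, 0.0]])

counts_att = np.ones(2)   # defender's empirical counts of attacker actions
counts_def = np.ones(2)   # attacker's empirical counts of defender actions

for _ in range(5000):
    p_def = counts_def / counts_def.sum()   # attacker's belief about defender
    p_att = counts_att / counts_att.sum()   # defender's belief about attacker
    a = int(np.argmax(A @ p_def))           # attacker best-responds
    d = int(np.argmax(p_att @ D))           # defender best-responds
    counts_att[a] += 1
    counts_def[d] += 1

# Empirical frequencies converge toward an equilibrium profile.
print(counts_att / counts_att.sum(), counts_def / counts_def.sum())
```

Adding observation noise to the recorded actions (as studied in the slide) would simply corrupt the counts before the best response is computed.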
Cryptography vs. Game Theory

Issue                 Cryptography                 Game Theory
Incentive             None                         Payoff
Players               Totally honest / malicious   Always rational
Punishing cheaters    Outside the model            Central part
Solution concept      Secure protocol              Equilibrium
Early stopping        Problem                      Not an issue
Y. Dodis, S. Halevi, T. Rabin. A Cryptographic Solution to a Game Theoretic Problem. Crypto 2000
Crypto and Game Theory
Cryptography → Game Theory: implement game-theoretic mechanisms in a distributed fashion
Example: replacing the mediator in correlated equilibria (Dodis et al., Crypto 2000)

Game Theory → Cryptography: design cryptographic mechanisms with rational players
Example: rational secret sharing and multi-party computation (Halpern and Teague, STOC 2004)
Improving Nash equilibria (1/2)
Game of Chicken (Player 1 chooses the row, Player 2 the column):

            Chicken   Dare
Chicken     4, 4      1, 5
Dare        5, 1      0, 0
3 Nash equilibria: (D, C), (C, D), and the mixed equilibrium (½ D + ½ C, ½ C + ½ D)
Payoffs: [5, 1], [1, 5], [5/2, 5/2]
The payoff [4, 4] cannot be achieved without a binding contract, because it is not an equilibrium
Possible improvement 1: communication. Toss a fair coin: if Heads, play (C, D); if Tails, play (D, C); average payoff = [3, 3]
Y. Dodis, S. Halevi, and T. Rabin. A Cryptographic solution to a game theoretic problem, Crypto 2000
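These equilibria can be checked mechanically. A small sketch using only the payoff matrix from the slide:

```python
# Sketch: verify the pure Nash equilibria of the Chicken game and the
# average payoff of the coin-toss convention. No assumed values beyond
# the slide's payoff matrix.

C, D = 0, 1
# payoffs[(a1, a2)] = (payoff to Player 1, payoff to Player 2)
payoffs = {(C, C): (4, 4), (C, D): (1, 5), (D, C): (5, 1), (D, D): (0, 0)}

def is_pure_ne(a1, a2):
    u1, u2 = payoffs[(a1, a2)]
    best1 = all(payoffs[(b, a2)][0] <= u1 for b in (C, D))  # P1 cannot gain
    best2 = all(payoffs[(a1, b)][1] <= u2 for b in (C, D))  # nor can P2
    return best1 and best2

pure_ne = [(a1, a2) for a1 in (C, D) for a2 in (C, D) if is_pure_ne(a1, a2)]
print(pure_ne)  # the two asymmetric profiles (C, D) and (D, C)

# Coin toss: play (C, D) on Heads and (D, C) on Tails, prob. 1/2 each.
avg = [(payoffs[(C, D)][i] + payoffs[(D, C)][i]) / 2 for i in (0, 1)]
print(avg)  # [3.0, 3.0]
```

The coin toss improves on both asymmetric equilibria on average, but still never reaches the [4, 4] outcome.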
Improving Nash equilibria (2/2)
Possible improvement 2: mediator

Introduce an objective chance mechanism: choose V1, V2, or V3 with probability 1/3 each. Then:
– Player 1 is told whether or not V1 was chosen, and nothing else
– Player 2 is told whether or not V3 was chosen, and nothing else
If informed that V1 was chosen, Player 1 plays D, otherwise C.
If informed that V3 was chosen, Player 2 plays D, otherwise C.
This is a correlated equilibrium, with payoff [3 1/3, 3 1/3]. It assigns probability 1/3 to each of (C, C), (C, D), and (D, C), and 0 to (D, D).
How to replace the mediator by a crypto protocol: see Dodis et al.
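The mediator's effect can be checked by enumerating its three signals; this sketch uses only the Chicken payoff matrix and the strategy rule from the slide:

```python
# Sketch: compute the expected payoff of the correlated equilibrium
# induced by the mediator (V1, V2, V3 each with probability 1/3).

from fractions import Fraction

C, D = 0, 1
payoffs = {(C, C): (4, 4), (C, D): (1, 5), (D, C): (5, 1), (D, D): (0, 0)}

total = [Fraction(0), Fraction(0)]
for signal in ("V1", "V2", "V3"):       # mediator picks each with prob 1/3
    a1 = D if signal == "V1" else C     # Player 1 only learns "V1 or not"
    a2 = D if signal == "V3" else C     # Player 2 only learns "V3 or not"
    u1, u2 = payoffs[(a1, a2)]
    total[0] += Fraction(u1, 3)
    total[1] += Fraction(u2, 3)

print(total)  # [10/3, 10/3], i.e. [3 1/3, 3 1/3] per player
```

The cryptographic protocol of Dodis et al. replaces this trusted chance mechanism by a two-party computation between the players themselves.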
Design of cryptographic mechanisms with rational players: secret sharing
[Figure: reminder on secret sharing — (a) a share issuer splits the secret into shares S1, S2, S3; (b) the shares are distributed to Agents 1, 2, 3; (c) the agents pool their shares S1, S2, S3 to reconstruct the secret]
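As a reminder, the scheme in the figure can be sketched as Shamir secret sharing; the threshold (2 of 3), the prime, and the secret below are illustrative choices, not part of the slide:

```python
# Hedged sketch of (2, 3) Shamir secret sharing over a prime field:
# a degree-1 polynomial with the secret as constant term, evaluated
# at three points; any two shares reconstruct the secret.

import random

P = 2**31 - 1  # a Mersenne prime; any prime larger than the secret works

def make_shares(secret, n=3, k=2):
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    poly = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]   # share = (x, f(x))

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(42)
print(reconstruct(shares[:2]))  # any two shares recover 42
```

Fewer than k shares reveal nothing about the secret, which is what makes withholding one's share tempting in the game analyzed next.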
The temptation of selfishness in secret sharing
[Figure: Agents 2 and 3 send their shares S2 and S3, while Agent 1 keeps its share S1 to itself]
• Agent 1 can reconstruct the secret
• Neither Agent 2 nor Agent 3 can
• Model as a game:
– Player = agent
– Strategy: deliver one's share or not (depending on what the other players did)
– Payoff function:
  • a player prefers getting the secret
  • a player prefers that fewer of the others get it
• Impossibility result: there is no practical (deterministic) mechanism that would work
• Proposed solution: a randomized mechanism
J. Halpern and V. Teague. Rational Secret Sharing and Multi-Party Computation. STOC 2004
Intrusion Detection Systems
[Figure: an attacker targeting one of several subsystems]

Players: Attacker and IDS
Strategies for the attacker: which subsystem(s) to attack
Strategies for the defender: how to distribute the defense mechanisms
Payoff functions: based on the value of the subsystems and the protection effort
T. Alpcan and T. Basar, “A Game Theoretic Approach to Decision and Analysis in Network Intrusion Detection”, IEEE CDC 2003
Two detailed examples
• Revocation Games in Ephemeral Networks [CCS 2008]
• On Non-Cooperative Location Privacy: A Game-Theoretic Analysis [CCS 2009]
Misbehavior in Ad Hoc Networks
Traditional ad hoc networks:
• Packet forwarding
• Routing

Ephemeral networks:
• Large scale
• High mobility
• Data dissemination

Solution to misbehavior: reputation systems?
Reputation vs. Local Revocation
• Reputation systems:– Often coupled with routing/forwarding– Require long-term monitoring– Keep the misbehaving nodes in the system
• Local Revocation– Fast and clear-cut reaction to misbehavior– Reported to the credential issuer– Can be repudiated
Tools of the Revocation Trade
• Wait for:– Credential expiration– Central revocation
• Vote with:– Fixed number of votes– Fixed fraction of nodes (e.g., majority)
• Suicide:– Both the accusing and accused nodes are revoked
Which tool to use?
How much does it cost?
• Nodes are selfish• Revocation costs• Attacks cause damage
How to avoid the free rider problem?
Game theory can help: it models situations where the decisions of players affect each other
Example: VANET
• CA pre-establishes credentials offline
• Each node has multiple changing pseudonyms
• Pseudonyms are costly
• Fraction of detectors = p_d
Revocation Game
• Key principle: revoke only costly attackers
• Strategies:
– Abstain (A)
– Vote (V): n votes are needed
– Self-sacrifice (S)
• N benign nodes, including p_d·N detectors
• M attackers
• Dynamic (sequential) game
Game with fixed costs

[Figure: extensive-form tree of the sequential revocation game — players 1, 2, 3 move in turn, each choosing Abstain (A), Vote (V), or Self-sacrifice (S); the leaf payoffs combine the cost of abstaining (c), the cost of voting (v), and the cost of self-sacrifice (1)]

A: Abstain, S: Self-sacrifice, V: Vote
Costs: c = cost of abstaining, v = cost of voting, 1 = cost of self-sacrifice (all costs are in keys/message)
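The subgame-perfect equilibrium of such a tree is found by backward induction, which can be sketched as follows. The payoff model (cost c for a successful attack, v per vote, 1 for self-sacrifice, 2 votes needed) is an illustrative reconstruction in the spirit of the slide, not the paper's exact tree:

```python
# Hedged sketch: backward induction on a 3-player sequential
# revocation game with strategies Abstain (A), Vote (V),
# Self-sacrifice (S). Costs c, v and the vote threshold are assumed.

V_NEEDED = 2            # votes needed to revoke the attacker
c, v = 0.7, 0.1         # cost of abstaining (attack damage) and of voting

def payoffs(history):
    """Leaf payoffs once all three players have moved."""
    revoked = "S" in history or history.count("V") >= V_NEEDED
    out = []
    for action in history:
        cost = v if action == "V" else 1.0 if action == "S" else 0.0
        if not revoked:
            cost += c               # everyone suffers the attack damage
        out.append(-cost)
    return tuple(out)

def backward_induction(history=()):
    """Return (action sequence, payoffs) of the subgame-perfect play."""
    i = len(history)
    if i == 3:
        return history, payoffs(history)
    options = [backward_induction(history + (a,)) for a in ("A", "V", "S")]
    return max(options, key=lambda o: o[1][i])  # player i maximizes own payoff

path, u = backward_induction()
print(path, u)  # the first mover free-rides; the later players cast the votes
```

Under these assumed costs the first player abstains and leaves the voting to the last movers, which illustrates the "revocation is left to the end" conclusion stated below.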
Assumptions: c > 1
[Figure: the same game tree, with the equilibrium path highlighted]
Game with fixed costs: Example 1
Backward induction
Assumptions: v < c < 1, n = 2
[Figure: the same game tree, with the equilibrium path highlighted]
Game with fixed costs: Example 2
Theorem 1: For any given values of ni, nr, v, and c, the strategy of player i that results in a subgame-perfect equilibrium is:
ni = number of remaining nodes that can participate in the game
nr = number of remaining votes required to revoke
Game with fixed costs: Equilibrium
Revocation is left to the end; this doesn't work in practice
Game with variable costs

[Figure: game tree in which the cost of abstaining c_j increases with the stage j — the attack damage accumulates over the number of stages]
Theorem 2: For any given values of ni, nr, v, and δ, the strategy of player i that results in a subgame-perfect equilibrium is:
Game with variable costs: Equilibrium
Revocation has to be quick
Optimal number of voters

• Minimize the revocation cost over the number of voters n, trading off the duration of the attack against the abuse by attackers
• n_opt = min{ p_a · p_d · N, M }, where p_a is the fraction of active players
RevoGame
Estimation of parameters
Choice of strategy
Evaluation
• TraNS, ns-2, Google Earth, Manhattan map
• 303 vehicles, average speed = 50 km/h
• Parameters: fraction of detectors p_d = 0.8, false-positive probability p_fp = 10^-4, and damage/stage and cost of voting v set to 0.1 and 0.02
• 50 runs, 95% confidence intervals
[Figures: simulation results — revoked attackers, revoked benign nodes, and maximum time to revocation]
Conclusion
• Local revocation is a viable mechanism for handling misbehavior in ephemeral networks
• The choice of revocation strategies should depend on their costs
• RevoGame achieves the elusive tradeoff between different strategies
Two detailed examples
• Revocation Games in Ephemeral Networks [CCS 2008]
• On Non-Cooperative Location Privacy: A Game-Theoretic Analysis [CCS 2009]
Pervasive Wireless Networks
[Figure: examples of pervasive wireless networks — human sensors, vehicular networks, mobile social networks, personal WiFi bubbles]
Peer-to-Peer Communications
[Figure: two WiFi/Bluetooth-enabled nodes exchange messages of the form (Identifier, Message, Signature || Certificate)]
Location Privacy Problem
A passive adversary monitors the identifiers used in peer-to-peer communications.

[Figure: the adversary reconstructs a trace — 10h00: Millennium Park, 11h00: Art Institute, 13h00: Lunch]
Previous Work
• Pseudonymity is not enough for location privacy [1, 2]
• Removing pseudonyms is not enough either [3]
Spatio-Temporal correlation of traces
[1] P. Golle and K. Partridge. On the Anonymity of Home/Work Location Pairs. Pervasive Computing, 2009.
[2] B. Hoh et al. Enhancing Security & Privacy in Traffic Monitoring Systems. Pervasive Computing, 2006.
[3] B. Hoh and M. Gruteser. Protecting location privacy through path confusion. SECURECOMM, 2005.
Location Privacy with Mix Zones
[Figure: two nodes with pseudonyms 1 and 2 enter a mix zone and exit with pseudonyms x and y; the adversary cannot link entries to exits]
Temporal decorrelation: change pseudonym
Spatial decorrelation: remain silent
Why should a node participate?

[1] A. Beresford and F. Stajano. Mix Zones: user privacy in location aware services. Percom, 2004.
Mix Zone Privacy Gain
The privacy gain at time T is the uncertainty of the adversary's assignment of exiting to entering nodes:

A_i(T) = − Σ_{d=1}^{n(t)} p_{d|b} · log2( p_{d|b} )

where n(t) is the number of nodes in the mix zone and p_{d|b} is the probability that exit event d corresponds to entering node b.

[Figure: nodes 1 and 2 enter the mix zone at t⁻ and exit at t = T under new pseudonyms x and y]
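The entropy above is straightforward to compute; in this sketch the mapping probabilities are illustrative assumptions, not values from the paper:

```python
# Sketch: the mix-zone privacy gain as the entropy of the adversary's
# mapping between entering and exiting nodes.

from math import log2

def privacy_gain(p):
    """A_i(T) = -sum_d p_d * log2(p_d) over the n(t) candidate mappings."""
    return -sum(q * log2(q) for q in p if q > 0)

print(privacy_gain([0.5, 0.5]))   # 2 indistinguishable nodes -> 1 bit
print(privacy_gain([0.9, 0.1]))   # skewed mapping -> less privacy
print(privacy_gain([1.0]))        # fully traceable node -> 0 bits
```

The gain is maximal, log2(n(t)), when all n(t) mappings are equally likely, which is the upper bound Ai(T) = log2(n(t)) used later in the game model.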
Cost caused by Mix Zones
• Turn off transceiver
• Routing is difficult
• Need new authenticated pseudonyms
Problem
Tension between cost and benefit of mix zones
When should nodes change pseudonym?
Method
• Game theory– Evaluate strategies– Predict evolutionof security/privacy
• Example– Cryptography– Revocation– Privacy mechanisms
Rational behavior: selfish optimization
Security protocols: multi-party computations
Outline
1. User-centric Model
2. Pseudonym Change Game
3. Results
Mix Zone Establishment
• In pre-determined regions [1]
• Dynamically [2]– Distributed protocol
[1] A. Beresford and F. Stajano. Mix Zones: user privacy in location aware services. PercomW, 2004.
[2] M. Li et al. Swing and Swap: User-centric approaches towards maximizing location privacy. WPES, 2006.
User-Centric Location Privacy Model
Privacy = Ai(T) – PrivacyLoss
[Figure: privacy over time — privacy jumps to Ai(T1) and Ai(T2) at pseudonym change times t1 and t2, and the node is traceable while its privacy decays in between]
Pros/Cons of user-centric Model
• Pro– Control when/where to protect your privacy
• Con– Misaligned incentives
Outline
1. User-centric Model
2. Pseudonym Change Game
3. Results
Assumptions
Pseudonym Change game– Simultaneous decision
– Players want to maximize their payoff
– Consider the privacy upper bound Ai(T) = log2(n(t))
• Strategy– Cooperate (C) : Change pseudonym– Defect (D): Do not change pseudonym
Game Model
• Players
– Mobile nodes in transmission range
– There is a game iff n(t) > 1
Pseudonym Change Game
[Figure: three nodes meet at time t1 and enter a silent period; each simultaneously plays C (change pseudonym) or D (do not change)]
Payoff Function

u_i = privacy − cost

• If a node cooperates and is not alone: u_i = A_i(T) − γ
• If a node cooperates but is alone: u_i = u_i⁻ − γ (the pseudonym change is wasted)
• If a node defects: u_i = u_i⁻

where u_i⁻ is the node's privacy before the game and γ is the cost of changing pseudonym.
Sequence of Pseudonym Change Games
[Figure: a node participates in successive pseudonym change games at events E1, E2, E3 (times t1, t2, t3); its payoff u_i jumps to Ai(T1) − γ and Ai(T2) − γ after successful changes and decays in between]
Outline
1. User-centric Model
2. Pseudonym Change Game
3. Results
C-Game: complete information
Each player knows the payoffs of its opponents
2-Player C-Game
Two pure-strategy Nash equilibria (NE): (C, C) and (D, D)
One mixed-strategy NE
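The coordination structure of the 2-player C-game can be checked with the payoff function above; the numbers for the prior privacy u_i⁻ and the cost γ are illustrative assumptions:

```python
# Sketch: the 2-player pseudonym-change C-game as a coordination game.
# Assumed: prior privacy u_minus, change cost gamma, and the privacy
# upper bound log2(2) = 1 bit for a successful joint change.

from math import log2

u_minus, gamma = 0.2, 0.3     # assumed prior privacy and pseudonym cost
A = log2(2)                   # privacy after a joint pseudonym change

C, D = 0, 1
def u(me, other):
    if me == C:               # cooperating costs gamma; it pays off
        return (A if other == C else u_minus) - gamma
    return u_minus            # defectors keep their old privacy

profiles = [(a, b) for a in (C, D) for b in (C, D)]
ne = [(a, b) for a, b in profiles
      if u(a, b) >= u(1 - a, b) and u(b, a) >= u(1 - b, a)]
print(ne)  # both (C, C) and (D, D) are pure Nash equilibria
```

As in the game of Chicken earlier, a mixed-strategy equilibrium also exists between the two pure ones.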
Best Response Correspondence
2 pure-strategy NE
1 mixed-strategy NE
n-Player C-Game
• All Defection is always a NE
• A NE with cooperation exists iff there is a group of k users with log2(k) ≥ u_i, for every i in the group of k nodes

Theorem: The static n-player pseudonym change C-game has at least 1 and at most 2 pure-strategy Nash equilibria.
C-Game Results
Result 1: high coordination among nodes at NE
• Change pseudonyms only when necessary
• Otherwise defect
I-Game: incomplete information
Players do not know the payoffs of their opponents
Bayesian Game Theory
• Define the type of player i as θi = u_i⁻
• Predict the actions of opponents based on a probability density function f(θi) over the types
Modeling of the Environment
[Figure: example probability densities f(θi) for low-, medium-, and high-privacy environments; each curve models the assumed propensity of other nodes to cooperate]
Threshold Strategy

• A threshold θ̃i determines the player's action: cooperate (C) if θi < θ̃i, defect (D) otherwise
• The probability of cooperation is F(θ̃i) = Pr(θi ≤ θ̃i) = ∫₀^θ̃i f(θi) dθi
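A symmetric equilibrium threshold can be located numerically as the type that is indifferent between C and D. This sketch assumes an illustrative payoff model in the spirit of the slides (types uniform on [0, A], cooperator earns A − γ if the opponent cooperates and θ − γ otherwise, defector keeps θ); it is not the paper's exact derivation:

```python
# Hedged sketch: bisection for a symmetric Bayesian NE threshold of a
# 2-player I-game. The type distribution and payoffs are assumptions.

gamma = 0.2             # assumed pseudonym-change cost
A = 1.0                 # privacy upper bound log2(n(t)) for n(t) = 2

def F(t):
    """CDF of the uniform type distribution on [0, A]."""
    return min(max(t / A, 0.0), 1.0)

def indifference(t):
    """E[u | C] - E[u | D] for the threshold type theta = t."""
    return F(t) * (A - gamma) + (1 - F(t)) * (t - gamma) - t

lo_, hi_ = 0.0, A       # bisection bracket for the threshold
for _ in range(60):
    mid = (lo_ + hi_) / 2
    if indifference(mid) > 0:
        lo_ = mid
    else:
        hi_ = mid

print(lo_)  # types below this threshold change their pseudonym
```

At the resulting threshold, the marginal type gains nothing by switching action, which is exactly the equilibrium condition stated on the next slide.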
2-Player I-Game Bayesian NE

Find the threshold θ̃i* such that:
average utility of cooperation = average utility of defection
Result 2: a large cost increases the cooperation probability
Result 3: strategies adapt to the environment
Result 4: a large number of nodes n in the mix zone discourages cooperation
Conclusion on non-cooperative location privacy
• Considered the problem of selfishness in location privacy schemes based on multiple pseudonyms
• Results:
– Non-cooperative behavior reduces the achievable location privacy
– If the cost is high, users care about coordination
– As the number of players increases, selfish nodes tend not to cooperate