Robust Winners and Winner Determination Policies under Candidate Uncertainty

JOEL OREN, UNIVERSITY OF TORONTO

JOINT WORK WITH CRAIG BOUTILIER, JÉRÔME LANG AND HÉCTOR PALACIOS.

Motivation – Winner Determination under Candidate Uncertainty

• A committee with preferences over alternatives:
  • Prospective projects.
  • Goals.

• Costly determination of availabilities:
  • Market research for determining the feasibility of a project: engineering estimates, surveys, focus groups, etc.

• The “best” alternative depends on which candidates are actually available.

Example profile:

4 voters: a ≻ b ≻ c
3 voters: b ≻ c ≻ a
2 voters: c ≻ a ≻ b

Winner: a – but each candidate's availability is uncertain ("?").

Efficient Querying Policies for Winner Determination

• Voters submit votes in advance.

• Query candidates sequentially, until enough is known to determine the winner.

• Example: a wins.

4 voters: a ≻ b ≻ c
3 voters: b ≻ c ≻ a
2 voters: c ≻ a ≻ b

Once a and b are known to be available, a wins under plurality regardless of c's availability ("?").

The Formal Model

• A set C of candidates.

• A vector v of rankings over C, one per voter (a preference profile).

• The set C is partitioned:
  1. Y – candidates known a priori to be available.
  2. U – the "unknown" set.

• Each candidate c ∈ U is available with probability p_c.

• Voting rule r: for an available set A, r(v(A)) is the election winner.

[Figure: an example profile over C = {a, b, c}, with the candidate set partitioned into Y (available) and U (unknown).]

Querying & Decision Making

• At each iteration, submit a query q(x) for some candidate x ∈ U.

• Information set: the candidates found so far to be available (Q+) and unavailable (Q−).

• The initial available set is Y.

• Upon querying candidate x:
  • If available: add x to Q+.
  • If unavailable: remove x from consideration (add it to Q−).

• v(A) – the restriction of the preference profile to a candidate set A.

• Stop when the information set is r-sufficient – no additional querying can change the outcome, which is the "robust" winner (a minimal sketch of this loop follows the figure below).

[Figure: the same example profile, with availability probabilities 0.5, 0.7, 0.4; queried candidates move into Q+ or Q−, the rest remain "?".]
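To make the querying loop concrete, here is a minimal Python sketch (an illustration, not the paper's code), using plurality as the rule r and a brute-force r-sufficiency test that enumerates every possible completion of the unknown set. The function names, the lexicographic tie-breaking, and the naive fixed query order are all assumptions of this sketch.

```python
from itertools import combinations

def plurality_winner(profile, available):
    """Plurality winner when each vote is restricted to `available`.
    Ties are broken lexicographically (an assumption made for this sketch)."""
    counts = {c: 0 for c in available}
    for ranking in profile:
        top = next(c for c in ranking if c in available)
        counts[top] += 1
    return max(sorted(counts), key=lambda c: counts[c])

def is_r_sufficient(profile, q_plus, unknown, rule):
    """True (plus the winner) iff the winner is identical for every way the
    still-unknown candidates could turn out -- the r-sufficiency test."""
    winners = set()
    unknown = list(unknown)
    for k in range(len(unknown) + 1):
        for extra in combinations(unknown, k):
            avail = set(q_plus) | set(extra)
            if not avail:
                continue  # ignore the degenerate empty election
            winners.add(rule(profile, avail))
            if len(winners) > 1:
                return False, None
    return True, (winners.pop() if winners else None)

def query_until_sufficient(profile, Y, U, is_available, rule=plurality_winner):
    """Query unknown candidates (here in a fixed, naive order) until the
    information set is r-sufficient; return the robust winner."""
    q_plus, unknown = set(Y), set(U)
    done, winner = is_r_sufficient(profile, q_plus, unknown, rule)
    while not done:
        x = sorted(unknown)[0]          # illustrative query order only
        unknown.remove(x)
        if is_available(x):             # the costly availability query
            q_plus.add(x)
        done, winner = is_r_sufficient(profile, q_plus, unknown, rule)
    return winner

# The 4/3/2-voter profile from the earlier slides; every candidate turns out available.
profile = [('a', 'b', 'c')] * 4 + [('b', 'c', 'a')] * 3 + [('c', 'a', 'b')] * 2
print(query_until_sufficient(profile, Y=set(), U={'a', 'b', 'c'},
                             is_available=lambda x: True))   # -> a
```

With this profile, querying a and then b already makes the information set r-sufficient, so c is never queried.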

Computing a Robust Winner

• Robust winner: Given (Y, U), candidate x is a robust winner if r(v(Y ∪ S)) = x for every S ⊆ U.

• A related question in voting [destructive control by candidate addition]: candidate set Y, disjoint spoiler set D, preference profile over Y ∪ D, candidate x with r(v(Y)) = x, voting rule r.
  • Question: is there a subset D' ⊆ D such that r(v(Y ∪ D')) ≠ x?

• Proposition: Candidate x is a robust winner ⟺ there is no destructive control against x, where the spoiler set is U.


Computing a Robust Winner (cont.)

• Recall: x is a robust winner ⟺ there is no destructive control against x with spoiler set U.

• Implication: robust-winner testing is coNP-complete for Plurality, Bucklin, and Ranked Pairs; polytime tractable for Copeland and Maximin.

• Additional results: checking whether x is a robust winner for Top Cycle, Uncovered Set, and Borda can be done in polynomial time.
  • Top Cycle & Uncovered Set: we prove useful criteria on the corresponding majority graph.
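The criteria themselves are in the paper; as background, here is a small sketch (not from the paper) of the object they are stated over – the majority graph, with an edge x → y whenever a strict majority of voters rank x above y.

```python
def majority_graph(profile, candidates):
    """Directed edge (x, y) iff a strict majority of voters prefer x to y."""
    n = len(profile)
    edges = set()
    for x in candidates:
        for y in candidates:
            if x == y:
                continue
            prefer_x = sum(1 for ranking in profile
                           if ranking.index(x) < ranking.index(y))
            if 2 * prefer_x > n:
                edges.add((x, y))
    return edges

profile = [('a', 'b', 'c')] * 4 + [('b', 'c', 'a')] * 3 + [('c', 'a', 'b')] * 2
print(sorted(majority_graph(profile, ['a', 'b', 'c'])))
# [('a', 'b'), ('b', 'c'), ('c', 'a')] -- a majority cycle
```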


The Query Policy

• Goal: design a policy for finding the correct winner.

• Can be represented by a decision tree.

• Example for the vote profile (plurality):
  • abcde, abcde, adbec,
  • bcaed, bcead,
  • cdeab, cbade, cdbea

[Decision tree for this profile: each internal node queries a candidate's availability (answers recorded in Q+ / Q−), and each leaf is labelled with the winner.]

Winner Determination Policies as Trees

• r-Sufficient tree:
  • The information set at each leaf is r-sufficient.
  • Each leaf is correctly labelled with the winner.

• Each candidate/node x has a querying cost c(x).

• Expected cost of a policy: the expected total query cost, over the distribution of the true available set A (a small evaluation sketch follows the figure below).

[Figure: the example decision tree again; each leaf corresponds to a set of availability outcomes, e.g. {a, b} ⊆ A.]
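A small sketch (illustrative data structures, not the paper's) of evaluating the expected cost of a given policy tree: internal nodes query a candidate and branch on the answer, and the expectation is taken over independent availability probabilities.

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Leaf:
    winner: str                       # leaf label: the determined winner

@dataclass
class Node:
    candidate: str                    # candidate queried at this node
    if_available: Union["Node", Leaf]
    if_unavailable: Union["Node", Leaf]

def expected_cost(tree, prob, cost):
    """Expected total query cost of a policy tree.
    prob[x] = availability probability of x, cost[x] = cost of querying x."""
    if isinstance(tree, Leaf):
        return 0.0
    p = prob[tree.candidate]
    return (cost[tree.candidate]
            + p * expected_cost(tree.if_available, prob, cost)
            + (1 - p) * expected_cost(tree.if_unavailable, prob, cost))

# Tiny example: query a; if available stop, otherwise query b.
tree = Node('a', Leaf('a'), Node('b', Leaf('b'), Leaf('c')))
print(expected_cost(tree, prob={'a': 0.5, 'b': 0.7}, cost={'a': 1, 'b': 1}))  # -> 1.5
```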


Recursively Finding Optimal Decision Trees

• Cost of a tree: its expected query cost over the distribution of available sets, as defined above.

• For each node – a training set: the possible true underlying available sets A that agree with the node's information set.

• Can be solved using a dynamic-programming approach (a minimal recursion is sketched below).

• Running time: computationally heavy.

[Figure: the example decision tree again.]
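One natural way to set up the recursion, as a minimal sketch (the names and the formulation are illustrative; the paper's DP over training sets may be organized differently): the state is the information set, an r-sufficient state has value 0, and otherwise we choose the candidate whose query minimizes the expected remaining cost.

```python
from functools import lru_cache
from itertools import combinations

def optimal_policy_cost(profile, Y, U, prob, cost, rule):
    """Minimum expected query cost of an r-sufficient policy, by exhaustive
    recursion over information sets (exponential, hence 'computationally heavy')."""
    def winner_fixed(q_plus, remaining):
        winners = set()
        for k in range(len(remaining) + 1):
            for extra in combinations(remaining, k):
                avail = set(q_plus) | set(extra)
                if avail:
                    winners.add(rule(profile, avail))
        return len(winners) <= 1

    @lru_cache(maxsize=None)
    def best(q_plus, remaining):
        if winner_fixed(q_plus, remaining):
            return 0.0                      # r-sufficient: stop querying
        branch_costs = []
        for x in remaining:
            rest = tuple(c for c in remaining if c != x)
            with_x = tuple(sorted(set(q_plus) | {x}))
            branch_costs.append(cost[x]
                                + prob[x] * best(with_x, rest)
                                + (1 - prob[x]) * best(q_plus, rest))
        return min(branch_costs)

    return best(tuple(sorted(Y)), tuple(sorted(U)))

# Example with plurality (lexicographic tie-breaking) on the 4/3/2-voter profile.
def plurality(profile, avail):
    tops = [next(c for c in r if c in avail) for r in profile]
    return max(sorted(set(tops)), key=tops.count)

profile = [('a', 'b', 'c')] * 4 + [('b', 'c', 'a')] * 3 + [('c', 'a', 'b')] * 2
print(optimal_policy_cost(profile, Y=set(), U={'a', 'b', 'c'},
                          prob={'a': 0.5, 'b': 0.7, 'c': 0.4},
                          cost={'a': 1, 'b': 1, 'c': 1}, rule=plurality))  # ~2.0
```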

Myopically Constructing Decision Trees

• A well-known approach: maximize information gain at every node until pure training sets (the leaves) are reached, as in C4.5.

• Myopic step: query the candidate with the highest "information gain" (the largest decrease in entropy of the training set), as sketched below.

• Running time: substantially lower than the exact DP approach.
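A sketch of the myopic step (illustrative, not the paper's code): the training set here is the collection of available sets consistent with the current information set, weighted by probability and labelled with their winners; we query the candidate whose answer minimizes the expected entropy of that label.

```python
from itertools import combinations
from math import log2

def entropy(labelled_weights):
    """Shannon entropy of a label distribution given (label, weight) pairs."""
    total = sum(w for _, w in labelled_weights)
    mass = {}
    for label, w in labelled_weights:
        mass[label] = mass.get(label, 0.0) + w / total
    return -sum(p * log2(p) for p in mass.values() if p > 0)

def training_set(profile, q_plus, unknown, prob, rule):
    """Every available set consistent with the information set, with its
    probability and its winner label."""
    rows = []
    unknown = list(unknown)
    for k in range(len(unknown) + 1):
        for extra in combinations(unknown, k):
            avail = set(q_plus) | set(extra)
            if not avail:
                continue
            weight = 1.0
            for c in unknown:
                weight *= prob[c] if c in extra else 1 - prob[c]
            rows.append((avail, weight, rule(profile, avail)))
    return rows

def myopic_query(profile, q_plus, unknown, prob, rule):
    """Pick the unknown candidate whose answer is expected to reduce the
    entropy of the winner label the most (a C4.5-style information-gain step)."""
    rows = training_set(profile, q_plus, unknown, prob, rule)
    def expected_entropy(x):
        h = 0.0
        for present in (True, False):
            branch = [(winner, weight) for avail, weight, winner in rows
                      if (x in avail) == present]
            if branch:
                h += sum(weight for _, weight in branch) * entropy(branch)
        return h
    return min(unknown, key=expected_entropy)   # max gain = min expected entropy

# Which candidate should be queried first on the 4/3/2-voter profile?
def plurality(profile, avail):
    tops = [next(c for c in r if c in avail) for r in profile]
    return max(sorted(set(tops)), key=tops.count)

profile = [('a', 'b', 'c')] * 4 + [('b', 'c', 'a')] * 3 + [('c', 'a', 'b')] * 2
print(myopic_query(profile, q_plus=set(), unknown={'a', 'b', 'c'},
                   prob={'a': 0.5, 'b': 0.7, 'c': 0.4}, rule=plurality))
```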


Empirical Results

• 100 votes; candidate availability probability p varied over {0.3, 0.5, 0.9} (columns of the table below).

• Mallows dispersion parameter φ (φ = 1 gives the uniform distribution).

• Tested for Plurality, Borda, and Copeland.

• Preference profiles drawn i.i.d. from a Mallows φ-distribution: ranking probabilities decrease exponentially with distance from a "reference" ranking (a sampling sketch follows).
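One standard sampler for the Mallows φ-distribution is the repeated-insertion method; the sketch below illustrates that general technique (the experiments' own sampler and the exact φ used are not specified here).

```python
import random

def sample_mallows(reference, phi, rng=random):
    """Sample a ranking from Mallows with dispersion phi (0 < phi <= 1) around
    `reference`, via repeated insertion: item i is placed at position j
    (1-indexed from the top) with probability phi**(i - j) / (1 + phi + ... + phi**(i-1)).
    phi = 1 gives the uniform distribution; phi -> 0 concentrates on `reference`."""
    ranking = []
    for i, item in enumerate(reference, start=1):
        weights = [phi ** (i - j) for j in range(1, i + 1)]
        j = rng.choices(range(1, i + 1), weights=weights)[0]
        ranking.insert(j - 1, item)
    return tuple(ranking)

# e.g. draw a profile of 100 i.i.d. votes around the reference ranking a > b > c > d > e
profile = [sample_mallows(('a', 'b', 'c', 'd', 'e'), phi=0.5) for _ in range(100)]
```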

Average cost (# of queries)

Method               p = 0.3   p = 0.5   p = 0.9
Plurality, DP          4.1       3.4       2.7
Plurality, Myopic      4.1       3.5       2.8
Borda, DP              3.7       2.7       1.7
Borda, Myopic          3.7       2.7       1.7

Empirical Results (cont.)

• Cost decreases as the availability probability p increases – less uncertainty about the set of available candidates.

• The myopic policy performed very close to the optimal DP algorithm.

• Not shown: cost increases with the dispersion parameter – "noisier"/more diverse preferences.

• Approximation: stop the recursion early, once the training set is (nearly) pure.


Additional Results

• Query complexity: the expected number of queries under a worst-case preference profile.
  • Result: bounds on the worst-case expected query complexity for Plurality, Borda, and Copeland.

• Simplified policies: assuming all candidates are symmetric (the same parameters for all), there is a simple iterative query policy that is asymptotically optimal.


Conclusions & Future Directions

• A framework for querying candidates under a probabilistic availability model.

• Connections to control of elections.

• Two algorithms for generating decision trees: DP and Myopic.

• Future directions:
  1. Ways of pruning the decision trees (depending on the voting rule).
  2. Sample-based methods for reducing training-set size.
  3. A deeper theoretical study of the query complexity.
