Anonymity


Transcript of Anonymity

Page 1: Anonymity

Large Scale Threats to Data Anonymity

Arvind Narayanan

Joint work with Vitaly Shmatikov and Kamalika Chaudhuri

Page 2: Anonymity

Anonymity is not cryptography

• Small “keyspace” – random guessing succeeds with probability 1/N
  Natural upper bound on N – the race is over!
  Guess-and-verify paradigm

• Even quadratic algorithms sometimes feasible!

• Conventional wisdom relied on computational infeasibility
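A minimal sketch of the guess-and-verify point above. The candidates, the record, and the auxiliary information are all invented; they only illustrate why a small, enumerable candidate set behaves nothing like a cryptographic keyspace.

```python
# Guess-and-verify over a small candidate set (all data invented).
# The "secret" – who a record belongs to – comes from an enumerable
# population of N people, not from a 128-bit keyspace.

candidates = ["alice", "bob", "carol", "dave"]            # hypothetical population
record     = {"zip": "53715", "birth_year": 1975}          # "anonymized" record

# Auxiliary information the attacker already holds about each candidate.
aux = {
    "alice": {"zip": "53715", "birth_year": 1975},
    "bob":   {"zip": "02139", "birth_year": 1980},
    "carol": {"zip": "53715", "birth_year": 1990},
    "dave":  {"zip": "94305", "birth_year": 1975},
}

def verify(person, rec):
    """Check a guess against the auxiliary information."""
    return aux[person] == rec

# Exhaustive search is trivially feasible; even random guessing
# succeeds with probability 1/N.
print([p for p in candidates if verify(p, record)])        # ['alice']
```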

Page 3: Anonymity

The curse of dimensionality

• Too much entropy per record

• How high is high? Try 35,540 (a rating and a date for each of 17,770 movies)!

• k-anonymity breaks down

• Nearest neighbor too far
  Cinematch beats the baseline by 1%!

• Projection to low dimensions loses most of the info
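A small simulation of the sparsity argument above; the numbers are invented and only mimic the scale of a Netflix-like dataset. Records live in a ~17,770-item space but each user rates only a few hundred items, so a handful of known attribute values already isolates a single record.

```python
# Synthetic illustration of the curse of dimensionality (numbers invented,
# only the scale mimics a Netflix-like dataset).
import random

NUM_USERS, NUM_ITEMS = 10_000, 17_770
RATINGS_PER_USER = 200          # each record is extremely sparse

random.seed(0)
users = [
    {i: random.randint(1, 5) for i in random.sample(range(NUM_ITEMS), RATINGS_PER_USER)}
    for _ in range(NUM_USERS)
]

# The adversary learns just 4 (item, rating) pairs about a target.
target = users[0]
known = dict(random.sample(sorted(target.items()), 4))

# How many records in the whole dataset are consistent with that knowledge?
consistent = [u for u in users if all(u.get(i) == r for i, r in known.items())]
print(len(consistent))          # almost always 1 – a few attributes pin down the record
```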

Page 4: Anonymity

Auxiliary information

• Auxiliary information about people very easy to obtain

• Unlinkability of user traces – unaffordable luxury

• Yet linking across databases often disastrous

• Future privacy – linkage of “profile” to identity makes virtual identities impossible

Page 5: Anonymity

Two fallacies

• Identifying vs. non-identifying attributes
  All attributes are quasi-identifiers!
  Simply removing record labels is not sufficient

• Perturbation makes the attacker’s task harder
  Note superficial similarity with LPN (learning parity with noise)
  But non-cryptographic!
  Reality: re-identification algorithms are easily made noise-resilient

Page 6: Anonymity

Interactive protocols

• Severe computational limits
  Query-execute-analyze cycle

• Utility required may be non-statistical
  Database may even be non-relational

• Privacy for queries

• Data aggregator not trusted
  Algorithms in the distributed setting not well developed yet

Page 7: Anonymity

Sad realization #2

• Privacy usually an afterthought (not important until it affects you)
  Video Privacy Protection Act example

• Privacy vs. utility: Collect/release the data, ask questions later

Page 8: Anonymity

Sweeney – linking (exact match)

[Figure: linking an “anonymous” database to a non-anonymous one on shared attributes; annotations: “hardly secret”, “probably not secret”]
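For concreteness, a sketch of the linking step. Sweeney’s study joined “anonymized” hospital data to a public voter list on ZIP code, birth date, and sex; the toy records below are invented.

```python
# Sweeney-style exact-match linking (all records invented).
# Join an "anonymized" table to a public, non-anonymous one on the
# quasi-identifiers they share.

medical = [   # anonymized: names removed, sensitive attribute kept
    {"zip": "02138", "dob": "1954-07-31", "sex": "F", "diagnosis": "hypertension"},
    {"zip": "02139", "dob": "1980-01-02", "sex": "M", "diagnosis": "asthma"},
]

voters = [    # public voter list: names attached to the same quasi-identifiers
    {"name": "Jane Roe", "zip": "02138", "dob": "1954-07-31", "sex": "F"},
    {"name": "John Doe", "zip": "02139", "dob": "1980-01-02", "sex": "M"},
]

QUASI = ("zip", "dob", "sex")

def key(rec):
    return tuple(rec[q] for q in QUASI)

index = {key(v): v["name"] for v in voters}
for rec in medical:
    # The quasi-identifiers are "hardly secret"; the diagnosis probably is.
    print(index.get(key(rec)), "->", rec["diagnosis"])
```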

Page 9: Anonymity

Collaborative filtering: profiles

• Each of N users has a preference vector, or a preference profile

• One attribute for each item

• Goal: mine this database to predict preferences for new items

• Can we release an anonymized database of preference vectors?
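To make the data model concrete, here is a minimal sketch of preference profiles and a textbook user-based prediction. The cosine-similarity rule and the toy profiles are illustrative choices, not a description of any particular deployed recommender.

```python
# Preference profiles: one sparse vector per user, one attribute per item.
# Textbook user-based collaborative filtering, for illustration only.
from math import sqrt

profiles = {                                 # user -> {item: rating}
    "u1": {"item_a": 5, "item_b": 1, "item_c": 4},
    "u2": {"item_a": 4, "item_b": 2, "item_c": 5, "item_d": 4},
    "u3": {"item_a": 1, "item_b": 5, "item_d": 2},
}

def similarity(p, q):
    """Cosine similarity over co-rated items."""
    common = set(p) & set(q)
    if not common:
        return 0.0
    num = sum(p[i] * q[i] for i in common)
    den = sqrt(sum(p[i] ** 2 for i in common)) * sqrt(sum(q[i] ** 2 for i in common))
    return num / den

def predict(user, item):
    """Similarity-weighted average of other users' ratings for `item`."""
    pairs = [(similarity(profiles[user], p), p[item])
             for u, p in profiles.items() if u != user and item in p]
    total = sum(s for s, _ in pairs)
    return sum(s * r for s, r in pairs) / total if total else None

print(round(predict("u1", "item_d"), 2))     # predicted rating for an unseen item
```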

Page 10: Anonymity

Movielens – fuzzy match

• Hypothetical investigation

• Frankowski, Cosley, Sen, Terveen, Riedl.

• Anonymized database of movie ratings

• Attacker knows small number of approximate preferences

• Nearest neighbor stats confirmed

Page 11: Anonymity

Netflix – fuzzy match with noise

• Nearest neighbor graph

• Real attack, Narayanan & Shmatikov

• ~4 movies → unique re-identification
  Attacker knows either ratings or dates, approximately
  One of the data points can be completely wrong

• Found a couple of our friends

• Found a couple of users from IMDb

Page 12: Anonymity

Distance to nearest neighbor

Page 13: Anonymity

Netflix’s take on privacy

Even if, for example, you knew all your own ratings and their dates you probably couldn’t identify them reliably in the data because only a small sample was included (less than one-tenth of our complete dataset) and that data was subject to perturbation. Of course, since you know all your own ratings that really isn’t a privacy problem is it?

-- Netflix Prize FAQ

Page 14: Anonymity

Example of deanonymization

Page 15: Anonymity

Netflix – contributions

• Scoring tolerates a large amount of noise:
  \(\sum_{i \in M \cap M'} \left[\, e^{-\alpha |r_i - r_i'|} + c\, e^{-\beta |d_i - d_i'|} + \Gamma \,\right] / \log \#i\)

• Verifying deanonymization in the absence of an oracle:
  [score(max) – score(max2)] / std.dev(score)

• Extract user relationships
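A sketch of the scoring and verification steps above, assuming a record layout of {movie: (rating, date)}. The constants α, β, c, Γ and the reading of #i as the number of subscribers who rated movie i are assumptions for illustration, not the exact choices of the published attack.

```python
# Noise-tolerant scoring and oracle-free verification (parameters and
# record layout are illustrative assumptions).
import math
import statistics

ALPHA, BETA, C, GAMMA = 1.0, 0.5, 1.0, 0.0    # assumed constants

def score(aux, record, popularity):
    """aux, record: {movie: (rating, date)}; popularity: {movie: #raters}."""
    s = 0.0
    for i in set(aux) & set(record):
        (r1, d1), (r2, d2) = aux[i], record[i]
        weight = 1.0 / math.log(max(popularity[i], 2))   # rare movies count for more
        s += weight * (math.exp(-ALPHA * abs(r1 - r2))
                       + C * math.exp(-BETA * abs(d1 - d2))
                       + GAMMA)
    return s

def best_match(aux, released, popularity):
    """Top candidate plus its eccentricity:
    (score(max) - score(max2)) / std.dev(score)."""
    scores = {uid: score(aux, rec, popularity) for uid, rec in released.items()}
    ranked = sorted(scores, key=scores.get, reverse=True)
    top, runner_up = ranked[0], ranked[1]
    sigma = statistics.pstdev(scores.values()) or 1.0
    eccentricity = (scores[top] - scores[runner_up]) / sigma
    return top, eccentricity   # accept only if eccentricity clears a threshold
```

A match is declared only when the best-scoring candidate stands out from the rest, which is what allows verification without an oracle.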

Page 16: Anonymity

Netflix customers with distance < 0.15

• Could edges reflect real-life relationships?

• Ratings and dates were ignored

Page 17: Anonymity

Recommenders: stronger attacks

• Do recommendation systems inherently leak profiles?
  No data release!

• Theoretical attacks known
  Textbook systems
  Deployed, complex systems

Page 18: Anonymity

Social networks

• Graph of interactions between people
  Think of phone-call graphs

• Different type of profile
  Non-relational data

Page 19: Anonymity

Backstrom, Dwork, Kleinberg

• Active and passive attacks

• Re-identify nodes touched by malicious edges

• Easy to find graph-structured patterns in large database

Page 20: Anonymity

Narayanan, Chaudhuri

• Tolerates noise

• Several attacks where a user can re-identify own node

• Subgraph isomorphism with several hundred nodes
  Heuristics involving node labels

• User knows own degree exactly
  Modern phones store all calls
  Who deletes email anymore?

Page 21: Anonymity

Finding yourself

• N instances of graph isomorphism

• Use isomorphism-invariant signatures
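One simple isomorphism-invariant signature, for illustration: a node’s own degree together with the sorted degrees of its neighbors. Any isomorphism preserves it, so a user who knows their own neighborhood can filter candidate nodes. The adjacency-list format and the tiny graphs below are made up.

```python
# A simple isomorphism-invariant signature: own degree plus the sorted
# degrees of the neighbors.  Graphs are {node: set(neighbors)} (invented).

def degree(graph, v):
    return len(graph[v])

def signature(graph, v):
    return (degree(graph, v), tuple(sorted(degree(graph, u) for u in graph[v])))

def find_candidates(anon_graph, my_signature):
    """Nodes in the anonymized graph whose signature matches the user's own."""
    return [v for v in anon_graph if signature(anon_graph, v) == my_signature]

# The user computes the signature of "me" in the graph they know ...
my_graph = {"me": {"a", "b", "c"}, "a": {"me", "b"}, "b": {"me", "a"}, "c": {"me"}}
# ... and looks for it in the anonymized release.
anon = {1: {2, 3, 4}, 2: {1, 3}, 3: {1, 2}, 4: {1}}

print(find_candidates(anon, signature(my_graph, "me")))    # [1]
```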

Page 22: Anonymity
Page 23: Anonymity

Propagation of node re-identification

• Surprisingly small number of seeds (6-12)

• Large fraction of nodes compromised

• Works even when a large fraction (say 80%) of nodes are honest
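A greedy sketch of the propagation idea, assuming both graphs are given as adjacency lists and a few seed pairs are already known. The matching rule below is a simplification for illustration, not the exact heuristics of the attack.

```python
# Greedy propagation from seed mappings (a simplified illustration).
# aux, anon: {node: set(neighbors)}; seeds: {aux_node: anon_node}.

def propagate(aux, anon, seeds, min_votes=2):
    mapping = dict(seeds)
    changed = True
    while changed:
        changed = False
        for a in aux:
            if a in mapping:
                continue
            # Each already-mapped neighbor of `a` "votes" for the
            # anonymized nodes adjacent to its image.
            votes = {}
            taken = set(mapping.values())
            for nbr in aux[a]:
                if nbr in mapping:
                    for cand in anon[mapping[nbr]]:
                        if cand not in taken:
                            votes[cand] = votes.get(cand, 0) + 1
            if votes:
                best = max(votes, key=votes.get)
                if votes[best] >= min_votes:   # enough agreement -> map and keep going
                    mapping[a] = best
                    changed = True
    return mapping
```

Run over a large graph, a handful of seed mappings can cascade into re-identification of a large fraction of the nodes, which is the point of this slide.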

Page 24: Anonymity
Page 25: Anonymity
Page 26: Anonymity

Propagation – implementation

• Social phishing

• Buddy zoo

• Skype worm

• Online addressbook service

• Competing social network

Page 27: Anonymity

Author identification

• Basically, a solved problem
  However, most studies use a small set of authors
  Not clear how the required sample size scales

• Combine with typing-pattern profiling
  Possibly deanonymize among millions/billions of users

• Example: oppressive country
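As a concrete example of the kind of features author-identification studies rely on, a minimal stylometric sketch using function-word frequencies and average sentence length. The feature set, word list, and texts are invented for illustration; real systems use far richer features.

```python
# Minimal stylometric attribution: function-word frequencies plus average
# sentence length, nearest known author wins (everything here is invented).
import re

FUNCTION_WORDS = ["the", "of", "and", "to", "in", "that", "is", "was", "it", "for"]

def features(text):
    words = re.findall(r"[a-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    n = len(words) or 1
    return [words.count(w) / n for w in FUNCTION_WORDS] + [n / max(len(sentences), 1)]

def distance(f, g):
    return sum(abs(a - b) for a, b in zip(f, g))    # L1 distance between profiles

known = {
    "author_A": features("The cat sat. It was the cat of the house."),
    "author_B": features("Run! Go now and do not stop for anything!"),
}
anonymous = features("It was the dog of the town. The dog sat.")
print(min(known, key=lambda a: distance(known[a], anonymous)))   # 'author_A'
```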

Page 28: Anonymity

Genome anonymity

• Rich social network

• ~10^8 bits entropy per record

• Labeled sample compromises privacy of blood relatives

• Crossover happens in a precise, elegant way
  Work on admixing populations

• Story of deanonymization of sperm donor

• Ease of obtaining auxiliary data or anonymous samples

Page 29: Anonymity

Genome and DNA databases

• HapMap – entire genome

• “Family tree” services

• 1/800 births from “anonymous” sperm donor

Page 30: Anonymity

HapMap’s take on privacy

The samples are anonymous with regard to individual identity. Samples cannot be connected to individuals, and no personal information is linked to any sample. As an additional safeguard, more samples were collected from each population than were used, so no one knows whether any particular person's DNA is included in the study.

Page 31: Anonymity

Trait – Genes – Chromosome location
• Hair/iris color – ASIP – 20q11.2
• Hair/iris color – DCT – 13q32
• Green/blue iris – EYCL1 – 19p13.1-q13.11
• Brown/blue iris – EYCL3 – 15q11-q15 *
• Height – GH1 – 17q22-q24
• Height (Laron) – GHR – 5p13-p12
• Brown/blond hair – HCL1 – 19p13.1-q13.11
• Brown/blond hair – HCL3 – 15q11-q15 *
• Brown/red hair – HCL2 – 4q28-q31
• Hair/iris color – HPS1 – 10q23.1-23.3
• Hair/iris color – HPS2 – 10q24.32
• Skin & hair color – MC1R – 16q24.3
• Height (Marfan) – MFS – 15q21.1
• Hair/iris color – MITF – 3p12.3-14.1
• Hair/iris color – MYO5A – 15q21
• Ocular albinism – OA1 – Xp22.3
• Ocular albinism – OA2 – Xp11.4-p11.23
• Oculocutaneous albinism – OCA2 – 15q11.2-q12
• Hair/iris color – PMOC – 2p23.3
• Hair/iris color – RAB27A – 15q15-21.1
• Hair/iris color – SILV – 12q13-q14
• Skin color – SLC24A5 – 15q21.1 (A111T, dark to light skin)
• Short stature – SS – X&Y p
• Hair/iris color – TYR – 11q14-q21
• Hair/iris color – TYRP1 – 9p23

Page 32: Anonymity

Genotype – phenotype mappings

• The medical community finds genotype → phenotype mappings
  Mappings being generated “at an explosive rate”

• But also [Sweeney02]: Inferring genotype from clinical phenotype through a knowledge-based algorithm
  Focuses on pathological phenotypes

Page 33: Anonymity

Underlying social network

Page 34: Anonymity

Big picture

• Attacks against a wide spectrum of rich, high-dimensional datasets

• Can we win the battle?
  Using technology alone?
  What if we don’t? Is part of it already lost?

Page 35: Anonymity

Thanks for coming.

Page 36: Anonymity

Current work

• Sweeney – exact match
• Movielens – fuzzy match
• Netflix – fuzzy match with noise
• AOL
• BDK07 – match on non-relational data
• NC07 – non-relational data with noise
• Amazon – fuzzy match with noise on utility oracle
• Genome – match based on multiple databases
• Genome – phenotype/genotype mapping

Page 37: Anonymity

Future work

• Author identification
  Combine with typing-pattern profiling
  Oppressive country example

• Genome reidentification based on observables

• Underlying social network

• SAT solver – generic matching