Designing Human Friendly Human Interaction Proofs (HIPs)



Kumar Chellapilla, Kevin Larson, Patrice Simard and Mary Czerwinski

Microsoft Research

Presented by Shaohua Xie, March 22, 2005

OUTLINE

Introduction
Definitions
User Study I
User Study II
Conclusion
References

Introduction

HIPs, or Human Interactive Proofs, are challenges meant to be easily solved by humans, while remaining too hard to be economically solved by computers.

An example character-based HIP.
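
To make the idea concrete, here is a minimal sketch of character-HIP generation. This is illustrative only, not the authors' generator; it assumes the Pillow imaging library and uses its built-in default font in place of a real typeface:

import random
import string
from PIL import Image, ImageDraw, ImageFont

def make_hip(num_chars=8, size=(220, 60)):
    """Render a random challenge string; distortions are applied afterwards."""
    challenge = "".join(random.choices(string.ascii_uppercase + string.digits,
                                       k=num_chars))
    img = Image.new("L", size, color=255)   # white grayscale canvas
    draw = ImageDraw.Draw(img)
    draw.text((10, 20), challenge, fill=0, font=ImageFont.load_default())
    return challenge, img

answer, image = make_hip()
image.save("hip.png")   # serve the image; keep `answer` server-side for checking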

HIPs are increasingly used to protect services against automatic script attacks.

Mailblocks HIP samples.

MSN HIP samples.

Register.com HIP samples.

EZ-Gimpy HIP samples.

YAHOO! HIP samples.

Ticketmaster HIP samples.

Google HIP samples.

Definitions

Plain text => Global Warp (sample images)

Plain text => Local Warp (sample images)
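
As a rough illustration of the distinction: a global warp displaces the whole string with one smooth, low-frequency field, while a local warp perturbs pixels with a higher-frequency field. The sketch below uses sinusoidal displacement fields, which is an assumption; the paper's actual warp fields are not reproduced here. It assumes NumPy, Pillow, and the hip.png written by the earlier sketch:

import numpy as np
from PIL import Image

def warp(img, amplitude, frequency):
    """Shift each pixel along a sinusoidal displacement field (inverse mapping)."""
    a = np.asarray(img.convert("L"), dtype=np.uint8)
    h, w = a.shape
    ys, xs = np.indices((h, w))
    src_x = np.clip(xs + amplitude * np.sin(2 * np.pi * frequency * ys / h), 0, w - 1)
    src_y = np.clip(ys + amplitude * np.cos(2 * np.pi * frequency * xs / w), 0, h - 1)
    return Image.fromarray(a[src_y.astype(int), src_x.astype(int)])

img = Image.open("hip.png")
global_warped = warp(img, amplitude=6, frequency=1)   # one smooth, whole-image bend
local_warped = warp(img, amplitude=3, frequency=8)    # many small, local ripples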

Definitions

Translated Text: Level 10, Level 25, Level 40 (sample images)

Rotated Text: Level 15, Level 30, Level 45 (sample images)

Scaled Text: Level 20, Level 35, Level 50 (sample images)
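
A hedged sketch of applying these single-parameter distortions with Pillow. Reading the levels as pixels (translation), degrees (rotation), and percent (scaling) is an assumption here:

from PIL import Image

def translate(img, level):
    # shift the content right and down by `level` pixels
    return img.transform(img.size, Image.Transform.AFFINE,
                         (1, 0, -level, 0, 1, -level), fillcolor=255)

def rotate(img, level):
    # rotate the image by `level` degrees
    return img.rotate(level, fillcolor=255)

def scale(img, level):
    # shrink the image by `level` percent
    w, h = img.size
    factor = 1 - level / 100
    return img.resize((max(1, int(w * factor)), max(1, int(h * factor))))

img = Image.open("hip.png").convert("L")
samples = [translate(img, 25), rotate(img, 30), scale(img, 35)]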

User Study I

HIPs that varied on only one parameter of distortion were presented to users.

Accuracy: the percentage of characters correctly recognized.

For the parameter levels tested on plain, translated, rotated, or scaled text HIPs, users were 99% correct or better.
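
The metric itself is simple; a minimal sketch (position-by-position matching, which assumes equal-length answers; length mismatches would call for edit distance):

def char_accuracy(truth, response):
    """Percentage of characters recognized correctly, position by position."""
    correct = sum(t == r for t, r in zip(truth, response))
    return 100.0 * correct / len(truth)

print(char_accuracy("A7KP3N", "A7KPBN"))   # 83.33...: 5 of 6 characters correct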

User Study I: Global Warp Text

Level 180, Level 270, Level 360 (sample images)

User Study I: Local Warp Text

Level 30, Level 55, Level 80 (sample images)

User Study II

Unidimensional HIPs have been systematically broken, with success rates of 5% or greater at 300 attempts per second [2,12].
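
A quick back-of-the-envelope calculation shows why even a 5% break rate is economically fatal for a free service:

success_rate = 0.05        # fraction of HIPs the attack solves [2,12]
attempts_per_second = 300  # attack throughput quoted above
signups_per_hour = success_rate * attempts_per_second * 3600
print(signups_per_hour)    # 54000.0 automated sign-ups per hour per machine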

Arcs and baselines are added to make HIPs much harder for computers to break.
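
A minimal sketch of adding random foreground arcs of a chosen count and thickness, assuming Pillow and the hip.png from earlier. The paper's actual arc-placement rules, including the constraint that keeps non-intersecting arcs away from the characters, are not reproduced here:

import random
from PIL import Image, ImageDraw

def add_arcs(img, n_arcs, thickness=1):
    """Draw `n_arcs` random elliptical arcs in the foreground color."""
    img = img.copy()
    draw = ImageDraw.Draw(img)
    w, h = img.size
    for _ in range(n_arcs):
        # random bounding box and angular extent for each arc
        x0, y0 = random.randint(-w // 2, w // 2), random.randint(-h // 2, h // 2)
        box = (x0, y0, x0 + random.randint(20, w), y0 + random.randint(10, h))
        draw.arc(box, random.randint(0, 360), random.randint(0, 360),
                 fill=0, width=thickness)
    return img

img = Image.open("hip.png").convert("L")
thin = add_arcs(img, n_arcs=18, thickness=1)
thick = add_arcs(img, n_arcs=18, thickness=3)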

User Study II: Thin Arcs that Intersect, plus Baseline

#Arcs: 0, #Arcs: 18, #Arcs: 36 (sample images)

User Study II: Thick Arcs that Intersect, plus Baseline

(sample images)

User Study II: Thick Arcs that Don’t Intersect, plus Baseline

#Arcs: 0, #Arcs: 18, #Arcs: 36 (sample images)

Conclusion

Most one-dimensional HIPs are easy for users to solve. However, human HIP-solving accuracy decreases significantly as the global or local warp level increases.

Accuracy also remained quite high across all tested levels when thin arcs were added in the foreground.

Adding intersecting thick arcs caused significant performance decrements, but non-intersecting thick arcs did not.

References

1. Simard PY, Szeliski R, Benaloh J, Couvreur J, and Calinov I (2003), “Using Character Recognition and Segmentation to Tell Computers from Humans,” International Conference on Document Analysis and Recognition (ICDAR), IEEE Computer Society, pp. 418-423.

2. Chellapilla K and Simard P (2004), “Using Machine Learning to Break Visual Human Interaction Proofs (HIPs),” Advances in Neural Information Processing Systems 17 (NIPS 2004), MIT Press.

3. Turing AM (1950), “Computing Machinery and Intelligence,” Mind, vol. 59, no. 236, pp. 433-460.

4. Von Ahn L, Blum M, and Langford J (2004), “Telling Computers and Humans Apart (Automatically) or How Lazy Cryptographers Do AI,” Communications of the ACM, 47(2):56-60.

5. First Workshop on Human Interactive Proofs, Palo Alto, CA, January 2002.

6. Von Ahn L, Blum M, and Langford J, The Captcha Project. http://www.captcha.net

7. Mori G and Malik J (2003), “Recognizing Objects in Adversarial Clutter: Breaking a Visual CAPTCHA,” Proceedings of the Computer Vision and Pattern Recognition (CVPR) Conference, IEEE Computer Society, vol. 1, pp. I-134-I-141.

8. Chew M and Baird HS (2003), “BaffleText: A Human Interactive Proof,” Proc. 10th IS&T/SPIE Document Recognition & Retrieval Conf., Santa Clara, CA, Jan. 22.

9. Simard PY, Steinkraus D, and Platt J (2003), “Best Practices for Convolutional Neural Networks Applied to Visual Document Analysis,” International Conference on Document Analysis and Recognition (ICDAR), IEEE Computer Society, Los Alamitos, pp. 958-962.

10. Selfridge OG (1959), “Pandemonium: A Paradigm for Learning,” in Symposium on the Mechanisation of Thought Processes, pp. 513-526, London: HM Stationery Office.

11. Pelli DG, Burns CW, Farrell B, and Moore DC, “Identifying Letters,” Vision Research (accepted).

12. Goodman J and Rounthwaite R (2004), “Stopping Outgoing Spam,” Proc. of the 5th ACM Conference on Electronic Commerce, New York, NY.