
Beware the Downgrading of Secure Electronic Mail

Oliver Wiese, Freie Universität Berlin, Berlin, Germany, [email protected]
Joscha Lausch, Freie Universität Berlin, Berlin, Germany, [email protected]
Jakob Bode, Freie Universität Berlin, Berlin, Germany, [email protected]
Volker Roth, Freie Universität Berlin, Berlin, Germany, [email protected]

ABSTRACT
Researchers have taken considerable interest in the usability challenges of end-to-end encryption for electronic mail whereas users seem to put little value into the confidentiality of their mail. On the other hand, users should see value in protection against phishing. Designing mail apps so that they help users resist phishing attacks has received less attention, though. A well-known and widely implemented mechanism can be brought to bear on this problem – digital signatures. We investigated contemporary mail apps and found that they make limited use of digital signatures to help users detect phishing mail. Based on the concept of digital signatures we developed and studied an opinionated user interface design that steers users towards safe behaviors when confronted with phishing mail. In a laboratory study with 18 participants we found that the control group was phishable whereas the experimental group remained safe. The laboratory setup has known limitations but turned out to be useful to better understand the challenges in studying e-mail security experimentally. In this paper, we report on our study and lessons learned.

KEYWORDS
Secure electronic mail, phishing, end-to-end-encryption

ACM Reference Format:
Oliver Wiese, Joscha Lausch, Jakob Bode, and Volker Roth. 2018. Beware the Downgrading of Secure Electronic Mail. In Proceedings of Workshop on Socio-Technical Aspects in Security and Trust (STAST2018). ACM, New York, NY, USA, 9 pages. https://doi.org/10.1145/nnnnnnn.nnnnnnn

1 INTRODUCTION
In a post-Snowden era, confidentiality is an essential property of secure communication. App developers are responding to this need and feature end-to-end encryption capabilities in their apps more prominently. Some popular messenger apps build their reputation on end-to-end encryption, for example, Signal, Telegram and Threema. In contrast, end-to-end encryption for electronic mail is widely implemented but rarely used, even though mail itself is still broadly used.

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s).
STAST2018, December 2018, San Juan, Puerto Rico, USA
© 2018 Copyright held by the owner/author(s).
ACM ISBN 978-x-xxxx-xxxx-x/YY/MM.
https://doi.org/10.1145/nnnnnnn.nnnnnnn

Besides person-to-person communication, typical use cases of mail are signing up for services, managing login credentials, receiving receipts and coupons, communicating with doctors and other professionals, and business communication [6]. This makes mail an attractive surveillance target for spies and a phishing target for cybercriminals. In the presence of phishing attacks, security properties other than confidentiality are becoming more important, that is, integrity, authenticity and non-repudiation. Integrity¹ can be achieved by means of message authentication codes or digital signatures. Authenticity² and non-repudiation³ can be achieved by means of digital signatures.

Phishing is an active attack whereas the confidentiality of mail may be breached by passive attacks if encryption features are absent or badly implemented. In 2005, Garfinkel and Miller studied how Outlook users dealt with active attacks. Their design significantly improved the security of users against phishing but was no panacea. We continue this line of investigation in our paper. Specifically, we study how individuals behave when they can reasonably expect to receive signed mail but receive mail without signatures. This models phishing strategies that seek to circumvent the fact that phishers cannot forge signatures with keys they do not possess. This strategy is independent of any particular cryptographic standard or trust model such as Public Key Infrastructure (PKI) in S/MIME, Web of Trust in PGP, or Trust On First Use (TOFU).

In what follows, we detail our threat model, related work and current solutions. We rationalize the design decisions we took, we describe our mail app design, and we describe, report and discuss the results of an empirical study we conducted with 18 participants. The study investigated how participants handled our app and how they responded to phishing attempts in a job application scenario. We conclude with remarks on our motivation and what we consider worthwhile future work.

2 THREAT MODEL
Current research on the subject of secure mail focuses on protection against passive eavesdropping, for example, by governmental agencies or mail providers. In addition to this threat, mail users are exposed to phishing attacks.

¹ The content is received as sent.
² The receiver can be convinced that the mail was sent by someone holding a specific private key.
³ The receiver can convince a third party that a given mail was sent by the holder of a specific private key.


A phishing attack is an active attack in which the adversary impersonates a particular person or organization. We assume that the user has communicated with the impersonated person or organization before. We exclude an attacker who has access to the mailbox of the impersonated person or organization, or of the user. In our threat model the adversary can forge the sender address of a mail without having access to the impersonated mail account.

We assume that the impersonated person or organization signs outgoing mail and the adversary does not sign the mail. The adversary cannot forge a digital signature or message authentication code (MAC).

3 STATE OF THE ART AND RELATED WORK
3.1 End-to-End Authentication
The headers and bodies of mail are not protected cryptographically by default [19, 23]. Consequently the sender address is not a reliable indicator of the sender. Digital signatures or message authentication codes support a cryptographically sound verification of the sender, when applied properly. Common mail encryption standards such as S/MIME [29] or PGP [13] support digital signatures. The pertinent keys can be associated with a mail address. For example, Apple's Mail.app only indicates that a mail is signed if the mail address in the certificate of a signer corresponds to the sender field. This requires a certificate to work. The certificate must be linked to a local trust anchor. Alternatively, local trust settings can be configured and self-signed certificates can be imported to that effect. This, however, erects insurmountable barriers to lay users.

Besides Apple's Mail.app, iOS Mail, Thunderbird and Outlook support S/MIME by default and PGP with extensions. All applications distinguish between encrypted and signed mail and use separate security indicators for encryption and signatures. None of these mail user agents flags unsigned mail or warns users if they receive unsigned mail from a contact who sent signed mail previously. Users have to attend to these potential risks themselves.

3.2 Authentication of Relayed Mail
Besides client-side countermeasures, the Internet Engineering Task Force (IETF) specified a variety of complementary mechanisms that mail service providers can deploy to authenticate the senders of mail.

(1) The Sender Policy Framework (SPF) standard allows domain administrators to authorize hosts that are allowed to send mail in their name by way of DNS records. Mail transfer agents can request and use this information to verify whether received mail really originated from an authorized host in the sender's domain [18].
(2) The DomainKeys Identified Mail (DKIM) standard allows domain administrators to sign outgoing mail. The signing keys are published in DNS records as well [1]. Mail transfer agents can request and use this information to verify that the received mail really originated in the sender's domain.
(3) The Domain-Based Message Authentication, Reporting and Conformance (DMARC) standard allows domain administrators to publish SPF and DKIM policies, which specify how mail that cannot be authenticated should be handled [22].
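To make the mechanisms concrete, the following records are a minimal sketch of how the three standards are typically published as DNS TXT records for a hypothetical domain example.com; the selector name, address range and report address are illustrative placeholders, not values taken from any deployment discussed in this paper.

    example.com.                  IN TXT  "v=spf1 mx ip4:192.0.2.0/24 -all"
    mail._domainkey.example.com.  IN TXT  "v=DKIM1; k=rsa; p=<base64-encoded public key>"
    _dmarc.example.com.           IN TXT  "v=DMARC1; p=quarantine; rua=mailto:[email protected]"

Read this way, the SPF record authorizes the domain's MX hosts and one address range to send mail, the DKIM record publishes the verification key under the selector mail, and the DMARC record asks receivers to quarantine mail that fails authentication and to send aggregate reports.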

These standards are meant to assure that a mail address means what it says. In principle, users should be able to use that information to discern phishing mail from genuine mail for domains they deal with on a regular basis. The situation is not quite as simple in practice for at least two reasons.

First, some mail user agents implement "smart addresses", for example, Apple Mail. This means that only the display part of the mail address is shown to the user. Phishers may leverage this by sending mail from any legitimate domain (with respect to SPF/DKIM) and by forging merely the display name portion of the sender address. The crucial mailbox address is hidden from the user.

Second, the deployment of SPF/DKIM/DMARC is less than complete. In 2015, Durumeric et al. [11] reported that the top mail providers enrolled SPF/DKIM/DMARC but only 47% of all Alexa Top Million mail servers enrolled SPF and only 1.1% specified a DMARC policy. Of all policies, 58% were soft-failures where the receiving host is asked to accept a mail but mark it as suspicious, for example, as spam, and another 20.3% had a "no policy" policy. Foster et al. [14] reported that most of the 22 most popular mail providers, for example, hotmail.com, gmail.com, yahoo.com, aol.com, gmx.de, and mail.ru, made a DNS lookup for SPF but only 10 acted upon it. Foster et al. concluded that senders from Yahoo cannot be impersonated when sending mail to Hotmail, GMail and Yahoo accounts "and a handful of other providers." The Federal Trade Commission (FTC) reported in 2017 that 86% of the top businesses with authorized mail hosts are using SPF but only 10% of them have soft-or-strict DMARC policies.⁴ Small businesses hardly ever employ DMARC according to a 2018 report of the FTC. This was based on a sample of 11 web hosting providers for smaller businesses. Only one of the providers implemented SPF by default and 10 of them neither integrated an SPF setup during enrollment nor introduced the technology.

In summary, SPF, DKIM and DMARC help against phishing attacks but are not widely adopted or enforced. In what follows, we focus on end-to-end countermeasures rather than provider-based protection.

3.3 Research on Mail Encryption
Garfinkel and Miller proposed and studied a key continuity management design [15] using Outlook as an example. Their design distinguished four states of incoming mail by means of colored borders. Yellow indicated that a key and mail address were encountered for the first time. Green indicated that a mail was signed and the combination of key and mail address had been encountered before. Red indicated that the mail was signed but the key differed from what had been associated with that mail address before. Finally, gray indicated that a mail was unsigned but the sender had sent signed mail before. Garfinkel and Miller found that their design significantly improved the security of users against phishing but was no panacea. In particular, users replied to spoofed mail in the gray state in order to verify whether the sender was able to reply. Users took replies as confirmation of authenticity. Garfinkel and Miller also reported that users tended to be confused about the exact guarantees provided by signing and encryption.

⁴ See: https://www.ftc.gov/system/files/attachments/press-releases/online-businesses-could-do-more-protect-their-reputations-prevent-consumers-phishing-schemes/email_authentication_staff_perspective_0.pdf


However, while their design provided users with information they needed to decide what to do next, it did not encourage users to take specific actions to resolve suspicions they might have about potentially harmful mail.

In recent studies on private webmail (PWM) based on identity-based encryption, Ruoti et al. investigated encryption of mail but not the authenticity of mail [24]. PWM informs users that only the receiver can read a message (confidentiality). Their threat model focused on protecting the mail body against eavesdropping [26]. Neither integrity nor authenticity played a significant role; phishing and credential theft were not considered in their threat model. In [24], Ruoti et al. contrasted manual and automatic encryption. Study participants had to encrypt and decrypt messages but not verify them. In [25], they compared integrated and depot-based mail approaches and a mix of both. In their study, participants were asked to send a PIN and an SSN via encrypted mail to another participant. In subsequent interviews, encryption was often mentioned but authenticity and integrity were not.

A study by Atwater et al. [4] focused on encryption as well. Their software was based on PGP and a verified key server as a means of finding other public keys. In their application, users were able to disable encryption but it is unclear whether messages were signed or not.

Participants of several lab studies reported that they could not think of situations in which to encrypt [4, 25]. Gaw et al. and Lerner et al. [16, 21] studied mail encryption in the context of specific subgroups of users. Gaw et al. [16] interviewed employees of a non-violent, direct action organization and reported that encryption is used for classified information. Lerner et al. [21] studied mail encryption with a focus on lawyers and journalists. Six of eight lawyers pointed out that attorney-client privilege is an asset. Four mentioned account hacking as a threat; three mentioned mail spoofing, plaintext held by mail providers and government surveillance. Six of seven journalists named protecting sources as an asset but, as Lerner et al. discuss, the metadata of mail can already reveal the individuals who communicate with a journalist.

In summary, current research on secure mail focuses on protection against eavesdroppers. The authenticity of mail plays a lesser role, if any. The problem that arises from omitting signatures when they may be expected was touched upon only by Garfinkel et al. [15]. This problem exists independently of common concerns such as the choice of key management, specifically the choice between a Web of Trust, PKI, TOFU or perhaps the approach taken by Keybase.⁵

3.4 Phishing Detection in Browsers
Previous research shows that phishing detection is difficult for end-users without software support. In lab studies, primed participants failed to detect phishing websites [3, 9]. Dhamija et al. [9] studied how well users distinguish between a real and a fake website in a laboratory setting. One phishing website fooled more than 90% of the participants. About one decade later, participants were still vulnerable to phishing in a similar study by Alsharnouby et al. [3]. The researchers evaluated anti-phishing toolbars and concluded that they do not protect users against high-quality phishing attacks.

⁵ See: https://keybase.io/

Some users even ignored the toolbars [30, 31]. In a lab study by Egelman et al. [12], active browser warnings and indicators protected users against phishing attacks better than passive warnings. Later research shows that users do not pay much attention to browser warnings and security indicators for SSL [2, 28].

Researchers have also investigated the reactions of end-users to phishing mail. In a study by Jagatic et al. [17], mail address spoofing increased the success rate of a phishing attack from 16% (unknown sender) to 72% (friend's address). Downs et al. [10] studied different mail response decision strategies and concluded that none of them protect against well-constructed phishing attacks. In a study by Blythe et al. [7], well-presented mail and logos increased the success rate of an attacker. In contrast to digital signatures, text and logos are forgeable.

4 DESIGN RATIONALE
The primary purpose of digital signatures in the context of mail is to provide assurance that a mail is from the holder of a particular private key. Their application to the non-repudiation of a sent mail seems largely irrelevant in practice. We are not aware of a legal dispute that has been settled over the presentation of a signed mail. Mail encryption, on the other hand, is not generally suited to establish authenticity. Still, anecdotal evidence by Garfinkel et al. [15] and Lausch et al. [20] indicates that users are not necessarily aware of that. Keeping signatures and encryption separate in the user interface also opens up the opportunity that users choose to encrypt but not to sign outgoing mail. What they may not realize is that this potentially renders mail malleable in transit. Therefore, it is sensible to treat encryption and signatures as integral parts of secure mail. Consequently, we distinguish three states of received mail:

(1) Secure: received mail is encrypted and signed.
(2) Insecure: received mail is not encrypted or not signed.
(3) Corrupt: cryptographic processing yields an error.
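A minimal sketch of this classification in Swift; the flag and error names are ours for illustration and are not taken from the actual implementation:

    enum MailSecurityState {
        case secure    // encrypted and signed; shown as a closed envelope
        case insecure  // not encrypted or not signed; shown as a postcard
        case corrupt   // cryptographic processing failed; shown as a torn envelope
    }

    func classify(encrypted: Bool, signed: Bool, cryptoError: Error?) -> MailSecurityState {
        if cryptoError != nil { return .corrupt }           // decryption or verification failed
        return (encrypted && signed) ? .secure : .insecure  // both properties are required for "secure"
    }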

Secure mail is indicated by an icon that symbolizes a closed envelope. Insecure mail is indicated by a postcard. Corrupt mail is indicated by an envelope that is torn open at the upper right corner. Lausch et al. [20] found that these metaphors were understood well by users. They strike a compromise between a padlock metaphor (often used in the context of browser security and encryption) and seals (often used for signatures). The torn envelope worked particularly well for indicating error conditions. Padlocks and seals do not offer such a "third state" that lends itself to easy interpretation. Perhaps most important is that the envelope metaphor is unencumbered whereas users may have developed preconceptions about the meanings of padlocks and seals. We expect that this makes it easier to endow the envelope metaphor with a fresh interpretation. Besides, the name electronic mail alone establishes an analogy to paper mail, which immediately suggests envelopes and postcards as symbols.

When sending mail, we always sign and encrypt together whenever a key of the receiver is known. If a user composes mail for multiple recipients then we use the envelope indicator if and only if a key is known for all recipients. Otherwise, we use the postcard indicator. The rationale is of course that one plaintext mail suffices to leak the contents of a mail irrespective of how many copies are sent encrypted.
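The sending rule can be stated compactly. The following Swift sketch reuses the hypothetical MailSecurityState from above and assumes a simple lookup set of addresses for which a public key is known:

    // Show the envelope (sign and encrypt) only if every recipient has a known key;
    // a single key-less recipient forces the postcard, because one plaintext copy
    // suffices to leak the message.
    func outgoingState(recipients: [String], addressesWithKnownKey: Set<String>) -> MailSecurityState {
        return recipients.allSatisfy { addressesWithKnownKey.contains($0) } ? .secure : .insecure
    }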


Figure 1: Shows our tutorial in which we introduce envelopes and letters. We focus on security implications and avoid technical details.

When receiving mail, we show a warning if we receive insecure mail from a sender who sent secure mail previously. The warning is designed to remind the user that he or she should not put too much trust into the mail. The warning additionally recommends that the user replies with a secure mail. This covers the case that phishers spoof the mail address of a sender with whom the receiver has an existing relationship, that is, the receiver has received a genuine signed mail from the genuine sender previously. What we report in this paper are the results of our studies of this case.
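The warning condition itself needs only the state of the incoming mail and the history of the alleged sender. A sketch with hypothetical names, again not taken from the actual implementation:

    // Warn when a sender from whom we previously received secure mail now
    // appears to send insecure mail (the downgrade this paper is about).
    func shouldShowDowngradeWarning(incoming: MailSecurityState,
                                    senderSentSecureMailBefore: Bool) -> Bool {
        return incoming == .insecure && senderSentSecureMailBefore
    }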

What we present next are details of the mail app we designed to support our study, insofar as they are relevant for the interpretation of our study.

5 IMPLEMENTATION OF THE MAIL APP
As a basis for empirical studies on end-to-end protection mechanisms for mail we built a functional open source mail app prototype for iOS⁶, a deliberate platform choice we made when we applied for our project funding. Since Apple's implementation of Mail on the iPhone does not support plugins or extensions we had to develop our own app. Upon first launch we used a brief tutorial to introduce users of our app to the intended interpretation of the envelope and postcard, as shown in Fig. 1. In addition to our iconic language we used the colors green and yellow to increase the visual distinctiveness of the two cases.

⁶ The source code of the app is publicly available on the university GitLab server: https://git.imp.fu-berlin.de/enzevalos

Figure 2: When tapping on icons, brief explanations of the associated security states pop up.

While not being feature-complete yet, the app does support accessing regular IMAP mailboxes over TLS. Encryption and signatures are implemented on top of PGP⁷ but most key management functions are missing. PGP keys are stored in the phone's Keychain database and mail contacts can be added to the phone's Contact Book. Our app maintains a mapping between contacts and keys internally.

Mail correspondence is organized by contact and, within contacts, by key. Hence, a spoofed mail with a fresh key would appear in the contact's conversations but separate from the mail associated with the genuine key. Our app does not yet support updating keys in a forward-secure fashion. In principle, this can be implemented transparently for users. However, the specifics of that approach and whether it is preferable over a more explicit approach still justify further research in our opinion.
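Conceptually, this organization is a two-level index from contacts to keys to conversations. A sketch with hypothetical type names (the actual app keeps keys in the Keychain and contacts in the Contact Book, as described above):

    struct Mail {
        let subject: String
        let body: String
    }

    struct KeyRecord {
        let keyID: String          // fingerprint of a PGP key seen for this contact
        var conversations: [Mail]  // mail exchanged under this key
    }

    struct ContactRecord {
        let address: String        // mail address of the contact
        var keys: [KeyRecord]      // a spoofed mail with a fresh key lands in a new
                                   // KeyRecord, separate from the genuine key's mail
    }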

When composing or viewing mail, a colored bar at the top of the screen indicates the mail's security state. The bar is either green with an envelope icon in the middle, or yellow with a postcard icon in the middle. At any time, users can pop up a brief explanation of the security state by tapping on the icon, as shown in Fig. 2. A button within the pop-up offers to take the user to a panel with more information.

Perhaps the most distinctive feature of our app is the warning that shows up at the top of an insecure mail that follows a secure mail from the same alleged sender, see left of Fig. 3. The warning cautions the user not to trust the insecure mail too much because previous mail from the same sender was secure.

⁷ We use the open source PGP library ObjectivePGP: https://github.com/krzyzanowskim/ObjectivePGP


Figure 3: Participants in group A saw a warning (left) while group B only saw the message itself (right). The participants saw a German translation.

Additionally, it recommends that the user contact the sender of the mail to inquire whether the mail is authentic. The warning includes a button that opens a reply mail automatically. Users only need to tweak the text to their preference and send it. The mail is sent signed and encrypted. The encryption key is the one associated with the mail address of the alleged sender. Of course, phishers will not be able to respond to the inquiry with a secure mail because they lack the necessary private key.

6 USER STUDY
6.1 Method & Study Design
We conducted a lab study to test several of the concepts and security indicators of our iOS app. Specifically, we wanted to test whether a warning with a proposed action would affect the vulnerability of participants to a spear phishing attack. We tested this using an A/B test. Participants were assigned to one of two groups in an alternating fashion. We also measured the usability of the onboarding and security features of the app using the System Usability Scale (SUS). We were especially interested in users' perception of the change of state between secure and insecure mail.

Some aspects of the study are modeled after a study conducted by Ruoti et al. [26]. The study participants took part in a simulated application procedure for the fictional bank National Citadel. All app usage instructions were given in the form of mail from the bank. Participants also had to perform a secondary task in which they had to securely transmit credit card information to a roommate named Bob.

6.2 Participants
We recruited our participants using flyers distributed on the campus of our university. The flyers solicited participation in a user study for a new iOS communication app. We did not disclose the security context of our study in order to avoid self-selection bias towards people who are risk-aware or otherwise interested in security. We tried to select a diverse group in terms of age, gender and education of the applicants. However, we preferred iOS users because they are already accustomed to the usage patterns of the operating system and apps. We expected the results to be more consistent this way. Participants were compensated for their time with 15 €.

We recruited 20 participants for our study. We excluded the data of two participants from our analysis. The first participant encountered a technical problem that we fixed before we continued with our study. A mail that should have been decrypted was not. Hence, the task did not proceed as required. Another subject was an international student who claimed to understand the language in which we conducted the study. In a post-study interview it became clear that his language proficiency was not sufficient to follow the instructions.

Of our participants, 17 were students and one was a research assistant. Six participants had a background in psychology, three in educational sciences and three in economics. Biology (1), English philology (1), literature studies (1) and political science (1) were the other subjects. One participant did not answer that question. We asked the participants to rate their computer proficiency on a scale from 1 (beginner) to 5 (expert). Eleven of them rated themselves as a 4, six of them as a 3 and one of them as a 2. The mean age was 24.9 years. Five participants were male (27.8%), fewer than the 40.8% of males enrolled at the university.

6.3 Procedure
The study took place between the 6th and the 15th of July, 2017, in a room of our university we reserved for our experiments. The entire procedure took between 30 minutes and an hour. Participants wore eye-tracking glasses which were calibrated anew for each participant. The app ran on an iPhone 6s Plus. The primary task was presented to the participants on a piece of paper. All further instructions came in the form of mail from the company. A written introduction to the secondary task was given to the participants after they completed the second task.

• Task 1: The initial mail, which the participants read in the default iOS mail app, instructed them to set up our app according to company policy to secure the information they had to send the company in order to get a refund for their travel expenses. This task was designed to test the ability of the participants to set up the app without further assistance. Subsequently they had to send their first secure mail.
• Task 2: The second mail informed the participants that they were accepted for the job. The mail instructed them to inform the accounting department of their bank account number. This mail was the first secure mail they received. This task was designed to test whether the participants were able to send a new secure mail to a new contact. We also wanted to see if they checked the security state of mail.


• Task 3: The next mail came from Bob, a roommate of the fictional persona Ulli. He asked for credit card details in order to pay a utilities bill. This mail was insecure. This task was designed to show how participants invite a new user to our app. We did not evaluate in detail how the participants described the process to Bob. We continued as soon as it was possible for Bob to find the app in the App Store.
• Task 4: Bob responds with a secure mail, stating that he installed the app. To this mail the participants were expected to respond with the credit card details.
• Task 5: The responding mail from Bob was not secure. He asked for the verification number from the back of the credit card. The previous correspondence was included in the mail as a quotation. This task was designed to see how the participants would react to an insecure mail from a known contact from whom they had received a secure mail before. Participants from study group A were shown the warning message under study, see also Fig. 3.
• Task 6: The last mail appeared to come from the HR department of National Citadel but was insecure. It asked the participants to resend their bank information because there had been a problem. This time the given mail address of the accounting department had a different domain. This task was designed to test the behavior of participants when confronted with a phishing mail. Participants from group A were shown the warning again.

6.4 Results
6.4.1 Usability. Previous studies [4, 24–26] on secure and usable MUAs used the System Usability Scale (SUS) for measuring usability. SUS is broadly used for initial usability evaluations and was proposed by John Brooke in 1996 [8]. Users rate five positive and five negative statements about an application on a five-point Likert scale. The scores are normalized on a scale from 0 to 100. Our mobile mail application achieved a SUS rating of 75.4 on average with a standard deviation of 11.7. According to the rating of Bangor et al. [5] our app is classified as good in terms of usability. The participants of the group with the warning message awarded the app a SUS of 74.7 (sd: 14.4) while the group without warnings awarded a SUS of 76.1 (sd: 9.2).
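For reference, a single participant's SUS score is computed from the ten 1–5 responses with the standard SUS scoring rule [8]; the Swift sketch below is our illustration of that rule, not code from the study:

    // Odd-numbered items are positively worded (contribution: response - 1),
    // even-numbered items are negatively worded (contribution: 5 - response).
    // The total contribution (0...40) is scaled to 0...100 by multiplying with 2.5.
    func susScore(responses: [Int]) -> Double {
        precondition(responses.count == 10, "SUS uses exactly ten items")
        var sum = 0
        for (index, response) in responses.enumerated() {
            sum += (index % 2 == 0) ? response - 1 : 5 - response
        }
        return 2.5 * Double(sum)
    }

A participant who, for example, answers 4 to every positive item and 2 to every negative item scores 2.5 · (5 · 3 + 5 · 3) = 75, close to the reported mean of 75.4.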

All participants were able to finish all tasks and were able to compose and read mail. After testing the app and finishing the tasks, some participants mentioned that they missed a folder for drafts and sent mail in order to refer to previous outgoing mail. During the study, half of the participants used the Apple Mail.app to check for new mail.

We minimized the information about end-to-end encryption we provided to participants during the tutorial and the introduction. Likely for that reason, all except one participant read all four introduction screens. Participants spent about 35 seconds on average on the introduction with a standard deviation of 15 seconds. During the experiment, 13 of 18 participants clicked on the icons (envelope and postcard) for more information, 3.8 times on average. The pop-up provided superficial information, guidelines about end-to-end encryption and an additional button for more information. Nine participants checked the pop-up for more information.

After reading Bob's initial mail, three participants tapped answer to compose a reply and then clicked on the postcard icon. In the resulting pop-up, they found the invitation button for insecure contacts and invited Bob. Seven participants investigated the contact information page of Bob and tapped on the invitation button. Three of ten participants modified the general invitation mail and referred in it to the previous mail of Bob. Additionally, one participant composed a separate invitation mail after having sent the general invitation mail. Eight participants composed an invitation mail of their own.

6.4.2 Security. In our experiment, users made the following security decisions when receiving insecure mail. Table 1 summarizes the results. When Bob forgot to secure (encrypt and sign) his response, all participants sent the data securely. Six of nine participants in the group with warnings followed the recommendation and asked Bob to verify the previous mail before sending the data. We observed no similar behavior in the group without warnings.

In the case of a phishing attack, all participants in the group with warnings asked the sender to verify the previous message. All except one clicked on the button in the warning and sent the message without modification. One participant composed his own message and asked for a verification. All except one waited for a response and looked at the inbox; one investigated the contact information page and read the phishing mail again. Participants' scan paths indicate that most of them read the warning as well as the message (see Figure 4a), but none of them looked at the security indicator. The time to handle the phishing mail was 56.9 seconds on average and the participants spent 17.5 seconds on average reading the phishing mail and warning.

In the group without warning, we observed three different reactions with three participants each. Three participants immediately replied and sent the sensitive information to the adversary. Two of them read the postcard pop-up while composing the response and one of them read the postcard pop-up when reading the phishing mail. None of them inspected previously received mail of the sender.

Their scan paths indicate that they carefully read the phishing mail and looked at the security indicator, sender and receiver fields (see Figure 4d). The time to handle the phishing mail was 170 seconds on average. They spent 32.56 seconds on average reading the first mail and about 135.66 seconds on average composing the response.

Three other participants invited the adversary, similar to the invitation of Bob in a previous task. In two cases, the adversary could not accept the invitation because the mail provider blocked the account. In one case the attacker accepted the invitation and responded with a secure mail. This way, one participant sent the information "securely" to the attacker. All of them read the postcard pop-up while composing a response and used the provided invitation function. One of them inspected the secure and insecure contact information view of the impersonated sender. None of them inspected previously received mail of the sender. Their scan paths indicate that most of them checked the security indicator, sender and receiver fields (see Figure 4c). They spent 17.8 seconds on average reading the phishing mail and finished the task after 95.2 seconds on average.

The remaining three participants of the group without warnings were safe and sent an encrypted response to one of the previous mail addresses of the company.


Figure 4: Scan paths of participants while reading the phishing mail. (a) Warning group. (b) Participants who stayed safe. (c) Participants who invited the phisher. (d) Phished participants. Figure 4a shows the scan paths of all participants in this group; the other figures show the scan paths of subgroups of group B (without warning).

Their scan paths indicate that they only read the phishing mail briefly and did not focus on the security indicator (see Figure 4b). They spent 241.2 seconds on average finishing the task but only 14.2 seconds on average reading the mail for the first time. In contrast to other participants, they spent 12.9 seconds on average reading previous secure mail. They read the phishing mail at least two times. Before sending a secure mail they aborted composing an insecure mail.

We found a significant difference (p = 0.041 < 0.05) in successful phishing attacks between the warning and the no-warning group using a one-tailed Fisher's exact test. All but three participants stated in the survey after the experiment that they were able to discern between the two security states. Two were not certain. Five participants said they noticed the different color themes besides the security indicators. One said that he maybe noticed it subconsciously. Two said they had seen the different colors, but did not associate them with the security state.
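The reported p-value can be reproduced from the outcomes above. Assuming the contingency table 0 of 9 phished with warnings versus 4 of 9 without (the four comprise the three who replied insecurely and the one who sent the data "securely" to the attacker), the one-tailed Fisher's exact test reduces to a single hypergeometric term; the Swift sketch below is our reconstruction, not part of the study analysis:

    // Binomial coefficient via the multiplicative formula.
    func choose(_ n: Int, _ k: Int) -> Double {
        guard k >= 0 && k <= n else { return 0 }
        return (0..<k).reduce(1.0) { result, i in result * Double(n - i) / Double(i + 1) }
    }

    // Table [[0, 9], [4, 5]]: probability of the observed (most extreme) outcome of
    // 0 phished participants in the warning group, given 4 phished among 18 split 9/9.
    let p = choose(9, 0) * choose(9, 4) / choose(18, 4)
    print(p)  // ~0.0412, matching the reported p = 0.041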

6.5 Discussion
6.5.1 Usability. Our app received a SUS rating of 75.4 on average. Table 2 compares its SUS to the scores of mail apps studied previously. The SUS ratings of the warning group and the no-warning group differ but the standard deviations suggest that the measured difference is not indicative of a difference between the conditions. Post-condition interviews we conducted with the participants suggest that the usability hurdles our participants encountered were not caused by the security features but by missing mail management functions.

task                    action                           A  B
Bob forgot to encrypt   send data secure                 9  9
                        send invitation                  0  0
                        read previous mail before reply  0  0
                        ask sender                       6  0
Phishing attack         send data insecure               0  4
                        send invitation                  0  3
                        read previous mail before reply  0  6
                        send data secure                 0  3
                        ask sender                       9  0

Table 1: Participants' reactions after receiving an insecure mail where a secure one would be expected. In group A a warning called extra attention to the security state of the mail and offered a proposed action.

This speaks in favor of extending an existing mail app rather than developing a new one from scratch because existing apps likely have a more complete feature set and users are likely already familiar with the interface.

Five participants did not realize that our app is a regular MUA. They believed that all mail sent and received with it was secure, comparable to messenger apps such as Signal or WhatsApp. One participant thought all mail was secure because our app ran on an iPhone and everything from Apple was secure.


scheme              type                SUS mean  SUS std
PWM 2.0 [26]        gmail web plugin    80        9.2
Our Prototype       mobile mail client  75.4      11.7
integrated MP [4]   browser plugin      75        14
standalone MP [4]   browser plugin      74        13
PWM [25]            gmail web plugin    72.7      16.5
Virtru [25]         gmail web plugin    72.3      13.7
Tutanota [25]       browser plugin      52.2      17.8

Table 2: Our mail client compared to previously studied secure mail clients.

On opening the app for the first time, four pages tutored users on the possible security states and their properties. The participants spent on average 35 seconds (SD 15 seconds) on the tutorial. Even though the eye tracker data showed that all but one read the text, many could not actually remember the information. Two said that it would be useful to be able to get back to the information later.

6.5.2 Security. Participants in the warning group followed the recommendation and asked the assumed sender to verify the message. It seems that they were confident afterwards that the task was finished and they waited for a response.

Participants in the other group became unsure how to react. Most of them were bewildered by the phishing mail. They read the mail again, checked the security indicator, viewed previous mail (secure mail and the initial insecure one) or checked the contact information page. After considerable time, compared to the warning group, they fell back to learned or previous behavior. In the best case, they answered only to the known mail address and were safe. The invitation of the attacker may be considered behavior they learned from having invited Bob earlier. The analysis of the eye-tracker data showed how the participants looked for a solution to their problem: they would have to send an insecure mail containing sensitive information. They did not actually realize that the request for information was not legitimate.

This indicates that they did not interpret the situation correctly and tried to keep the information safe during transport despite the lack of trust in the receiver. The last group gave the information immediately to the phisher. They either did not grasp the situation or had no notion of how to solve the problem. Oddly, no participant in this group asked the sender to verify the previous message. One reason could be that the participants stuck to previously learned behavior and were unable to develop a new strategy in this new situation.

6.6 Limitations
Our study clearly suffers from the limitations that are usually associated with short-term laboratory studies in university settings. For example, habituation to warnings is a known effect in practice that we could not address properly because of the short duration of the experiment. Therefore, we cannot tell whether users of our app would eventually habituate to our warnings.

Furthermore, participants may misunderstand warnings as instructions of the experimenter. In order to counteract this effect we made the first warning occur in a false positive case so that participants have an opportunity to learn that warnings are merely warnings and not instructions on how to respond in a fashion that pleases the experimenter.

We recruited participants on campus and hence the participants reflect only a limited portion of the population. There remains a potential self-selection bias and a clear (albeit intended) bias towards iOS users. Most importantly, the number of participants was small. Therefore, we cannot expect our results to generalize to a larger population. Since we both designed and studied the application that was front and center of the study task, we expected a risk of unconsciously influencing study participants. We tried to counter this risk by scripting the interactions we had with participants and by providing them with information and instructions in written form whenever possible.

Participants in the no-warning group took more time to react to the phishing mail. The reason might have been that they were unsure how to accomplish the security goals of the task. On the other hand, they might have been merely unsure about how to proceed with the task itself since there were no specific instructions on what to do in the case of a suspected phishing attempt (a subtle yet important difference).

A typical challenge of studying security mechanisms is that participants know or assume that any threats or risks to which they are exposed in a study context are not real. Consequently, participants may accept greater risks in simulated settings than they do in a real setting [27]. We have not yet come to a conclusion on how to best address this limitation in agreement with basic ethical principles. Participants knew that they took no personal risk in our experiment and they needed to imagine being in that particular situation. Furthermore, we did not simulate realistic mail traffic and realistic response times. This of course limits the ecological validity of our experiment and should be considered in future experiments.

We focused on a specific phishing scenario. Additionally, more complex cases need to be investigated, for example, multiple keys and mail addresses per contact as well as rolling keys over in a fashion that is forward-secure. Only then will we have a more complete picture of the kinds of trade-offs between usability and security we can achieve.

Our study uses a scenario in which it is plausible that users can inquire with assumed senders whether a mail received previously is indeed authentic. In other scenarios, for example, the PayPal scenario we mentioned in our introduction, this seems less feasible. On the other hand, it is implausible that a company such as PayPal sends unprotected mail subsequent to adopting signed mail.

7 CONCLUSIONS AND FUTURE WORK
Making end-to-end encryption of electronic mail easy to use to the point where it becomes ubiquitous has been a goal of researchers and activists for a long time. And yet the goal seems elusive. Despite Snowden's revelations, apparently few users see sufficient value in encrypting their mail to make the necessary effort. Approaches that reduce the complexity of widely available encryption tools do not seem to resonate with the vendors who would need to embrace them. At the same time, messenger apps lead the way to end-to-end encryption, leaving mail behind.


Perhaps protection against phishing in areas where mail is important would be suitable to entice vendors and users to embrace signatures as a protection mechanism, and end-to-end encryption could be deployed along with it.

In order to investigate whether signatures can be effective against phishing we conducted a study with 18 participants. We studied whether recommending safe actions in response to suspicious mail helps users to steer clear of being phished. Specifically, we looked at the case where phishers spoof plaintext mail when a genuine sender would be capable of using signatures to authenticate mail (compared to no recommendation). Users responded positively to our recommendations and avoided being phished in all cases. The app we designed to provide end-to-end protection was rated quite usable despite a lack of mail management features. This gives us hope that opportunities exist to improve the mail experience to the point where mail can compete with messenger apps again for personal communication. One of the greatest strengths of mail is that it is a provider-independent standard whereas popular messengers are proprietary walled gardens.

More research is necessary to investigate how users might behave in various edge cases we have not covered in our study, and how users can be supported in these edge cases. However, we are convinced that if messengers can succeed with end-to-end protection then so can good old mail.

8 ACKNOWLEDGMENTS
The authors would like to thank the anonymous reviewers for their helpful comments and Isabella Bargilly for her help with proofreading. The first three authors were funded by the Bundesministerium für Bildung und Forschung (Federal Ministry of Education and Research, Germany) under grant number 16KIS0360K (Enzevalos).

REFERENCES
[1] Eric P. Allman, Jon Callas, Jim Fenton, Miles Libbey, Michael Thomas, and Mark Delany. DomainKeys Identified Mail (DKIM) Signatures. RFC 4871, May 2007. URL https://rfc-editor.org/rfc/rfc4871.txt.
[2] Hazim Almuhimedi, Adrienne Porter Felt, Robert W. Reeder, and Sunny Consolvo. Your Reputation Precedes You: History, Reputation, and the Chrome Malware Warning. In Symposium on Usable Privacy and Security (SOUPS 2014), SOUPS '14, pages 113–128. USENIX Association, 2014. ISBN 978-1-931971-13-3.
[3] Mohamed Alsharnouby, Furkan Alaca, and Sonia Chiasson. Why phishing still works: User strategies for combating phishing attacks. International Journal of Human-Computer Studies, 82(Supplement C):69–82, 2015. ISSN 1071-5819.
[4] Erinn Atwater, Cecylia Bocovich, Urs Hengartner, Ed Lank, and Ian Goldberg. Leading Johnny to Water: Designing for Usability and Trust. In Eleventh Symposium On Usable Privacy and Security (SOUPS 2015), pages 69–88, Ottawa, 2015. USENIX Association. ISBN 978-1-931971-249.
[5] Aaron Bangor, Philip Kortum, and James Miller. Determining What Individual SUS Scores Mean: Adding an Adjective Rating Scale. Journal of Usability Studies, 4(3):114–123, 2009.
[6] Frank Bentley, Nediyana Daskalova, and Nazanin Andalibi. "If a Person is Emailing You, It Just Doesn't Make Sense": Exploring Changing Consumer Behaviors in Email. In Proc. Conference on Human Factors in Computing Systems, CHI, pages 85–95. ACM, 2017. ISBN 978-1-4503-4655-9.
[7] Mark Blythe, Helen Petrie, and John A. Clark. F for Fake: Four Studies on How We Fall for Phish. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '11, pages 3469–3478, New York, NY, USA, 2011. ACM. ISBN 978-1-4503-0228-9.
[8] John Brooke. SUS - A quick and dirty usability scale. Usability evaluation in industry, 189(194):4–7, 1996. ISSN 1097-0193.
[9] Rachna Dhamija, J. D. Tygar, and Marti Hearst. Why Phishing Works. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '06, pages 581–590, New York, NY, USA, 2006. ACM.
[10] Julie S. Downs, Mandy B. Holbrook, and Lorrie Faith Cranor. Decision Strategies and Susceptibility to Phishing. In Proceedings of the Second Symposium on Usable Privacy and Security, SOUPS '06, pages 79–90, New York, NY, USA, 2006. ACM. ISBN 1-59593-448-0.
[11] Zakir Durumeric, David Adrian, Ariana Mirian, James Kasten, Elie Bursztein, Nicolas Lidzborski, Kurt Thomas, Vijay Eranti, Michael Bailey, and J. Alex Halderman. Neither Snow Nor Rain Nor MITM...: An Empirical Analysis of Email Delivery Security. In Proc. Internet Measurement Conference, IMC, pages 27–39. ACM, 2015. ISBN 978-1-4503-3848-6.
[12] Serge Egelman, Lorrie Faith Cranor, and Jason Hong. You've Been Warned: An Empirical Study of the Effectiveness of Web Browser Phishing Warnings. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '08, pages 1065–1074, New York, NY, USA, 2008. ACM. ISBN 978-1-60558-011-1.
[13] Hal Finney, Lutz Donnerhacke, Jon Callas, Rodney L. Thayer, and David Shaw. OpenPGP Message Format. RFC 4880, November 2007. URL https://rfc-editor.org/rfc/rfc4880.txt.
[14] Ian D. Foster, Jon Larson, Max Masich, Alex C. Snoeren, Stefan Savage, and Kirill Levchenko. Security by Any Other Name: On the Effectiveness of Provider Based Email Security. In Proc. Conference on Computer and Communications Security, CCS, pages 450–464. ACM, 2015. ISBN 978-1-4503-3832-5.
[15] Simson L. Garfinkel and Robert C. Miller. Johnny 2: A User Test of Key Continuity Management with S/MIME and Outlook Express. In Proceedings of the 2005 Symposium on Usable Privacy and Security, pages 13–24. ACM, 2005.
[16] Shirley Gaw, Edward W. Felten, and Patricia Fernandez-Kelly. Secrecy, Flagging, and Paranoia: Adoption Criteria in Encrypted Email. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '06, pages 591–600, New York, NY, USA, 2006. ACM. ISBN 1-59593-372-7.
[17] Tom N. Jagatic, Nathaniel A. Johnson, Markus Jakobsson, and Filippo Menczer. Social Phishing. Commun. ACM, 50(10):94–100, October 2007. ISSN 0001-0782.
[18] D. Scott Kitterman. Sender Policy Framework (SPF) for Authorizing Use of Domains in Email, Version 1. RFC 7208, April 2014. URL https://rfc-editor.org/rfc/rfc7208.txt.
[19] Dr. John C. Klensin. Simple Mail Transfer Protocol. RFC 5321, October 2008. URL https://rfc-editor.org/rfc/rfc5321.txt.
[20] Joscha Lausch, Oliver Wiese, and Volker Roth. What is a secure email? EuroUSEC '17, 2017.
[21] Adam Lerner, Eric Zeng, and Franziska Roesner. Confidante: Usable Encrypted Email: A Case Study with Lawyers and Journalists. In 2017 IEEE European Symposium on Security and Privacy, EuroS&P 2017, Paris, France, April 26-28, 2017, pages 385–400, 2017.
[22] Franck Martin, Eliot Lear, Tim Draegen, Elizabeth Zwicky, and Kurt Andersen. Interoperability Issues between Domain-Based Message Authentication, Reporting, and Conformance (DMARC) and Indirect Email Flows. RFC 7960, September 2016. URL https://rfc-editor.org/rfc/rfc7960.txt.
[23] Pete Resnick. Internet Message Format. RFC 5322, October 2008. URL https://rfc-editor.org/rfc/rfc5322.txt.
[24] Scott Ruoti, Nathan Kim, Ben Burgon, Timothy van der Horst, and Kent Seamons. Confused Johnny: When Automatic Encryption Leads to Confusion and Mistakes. In Proceedings of the Ninth Symposium on Usable Privacy and Security, SOUPS '13, pages 5:1–5:12, New York, NY, USA, 2013. ACM. ISBN 978-1-4503-2319-2.
[25] Scott Ruoti, Jeff Andersen, Scott Heidbrink, Mark O'Neill, Elham Vaziripour, Justin Wu, Daniel Zappala, and Kent Seamons. "We're on the Same Page": A Usability Study of Secure Email Using Pairs of Novice Users. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, CHI '16, pages 4298–4308, New York, NY, USA, 2016. ACM.
[26] Scott Ruoti, Jeff Andersen, Travis Hendershot, Daniel Zappala, and Kent Seamons. Private Webmail 2.0: Simple and Easy-to-Use Secure Email. In Proceedings of the 29th Annual Symposium on User Interface Software and Technology, UIST '16, pages 461–472, New York, NY, USA, 2016. ACM. ISBN 978-1-4503-4189-9.
[27] Stuart E. Schechter, Rachna Dhamija, Andy Ozment, and Ian Fischer. The Emperor's New Security Indicators. In Proceedings of the 2007 IEEE Symposium on Security and Privacy, SP '07, pages 51–65, Washington, DC, USA, 2007. IEEE Computer Society. ISBN 0-7695-2848-1. doi: 10.1109/SP.2007.35. URL https://doi.org/10.1109/SP.2007.35.
[28] Joshua Sunshine, Serge Egelman, Hazim Almuhimedi, Neha Atri, and Lorrie Faith Cranor. Crying Wolf: An Empirical Study of SSL Warning Effectiveness. In Proceedings of the 18th Conference on USENIX Security Symposium, SSYM'09, pages 399–416, Berkeley, CA, USA, 2009. USENIX Association.
[29] Sean Turner and Blake C. Ramsdell. Secure/Multipurpose Internet Mail Extensions (S/MIME) Version 3.2 Message Specification. RFC 5751, January 2010. URL https://rfc-editor.org/rfc/rfc5751.txt.
[30] Min Wu, Robert C. Miller, and Simson L. Garfinkel. Do Security Toolbars Actually Prevent Phishing Attacks? In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '06, pages 601–610, New York, NY, USA, 2006. ACM. ISBN 1-59593-372-7.
[31] Yue Zhang, Serge Egelman, Lorrie Cranor, and Jason Hong. Phinding Phish: Evaluating Anti-Phishing Tools. In Proceedings of the 14th Annual Network and Distributed System Security Symposium (NDSS 2007). Internet Society, 2007.