Attestation Transparency: Building secure Internet services for legacy clients

Jethro Beekman, John Manferdelli, David Wagner

Electrical Engineering and Computer Sciences
University of California at Berkeley

Technical Report No. UCB/EECS-2016-12
http://www.eecs.berkeley.edu/Pubs/TechRpts/2016/EECS-2016-12.html

March 15, 2016



Copyright © 2016, by the author(s). All rights reserved.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission.

Acknowledgement

We thank Jon McCune and our anonymous reviewers for their feedback. This work was supported by Intel through the ISTC for Secure Computing, AFOSR under MURI award FA9550-12-1-0040, and NSF under CCF-0424422.

Attestation Transparency

Building secure Internet services for legacy clients

Jethro G. Beekman
University of California, Berkeley

[email protected]

John L. Manferdelli
Google

[email protected]

David Wagner
University of California, Berkeley

[email protected]

ABSTRACT

Internet services can provide a wealth of functionality, yet their usage raises privacy, security and integrity concerns for users. This is caused by a lack of guarantees about what is happening on the server side. As a worst case scenario, the service might be subjected to an insider attack. We use remote attestation of the server to obtain guarantees about the programming of the service. On top of that, we augment Certificate Transparency to distribute information about which services exist and what they do. Combined, this creates a platform that allows legacy clients to obtain security guarantees about Internet services.

CCS Concepts

•Security and privacy → Trusted computing; Software security engineering; •Computer systems organization → Client-server architectures; •Networks → Cloud computing;

Keywords

Remote attestation; Secure enclaves; Certificate Transparency; Cloud computing

1. INTRODUCTION

End users increasingly perform important computing activities online in the cloud. This is convenient for them but the guarantees they get about those activities are significantly reduced from an ideal desktop-computing model where applications are run on trusted machines, inaccessible to adversaries, using software installed and maintained by knowledgeable trusted personnel known to the end user. On well-managed desktop machines, users can be confident that the software they use is the version they expect, with known behavior and mechanisms to prevent unauthenticated access to their data and unauthorized modification to the software itself.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

ASIA CCS'16, May 30-June 03, 2016, Xi'an, China
© 2016 Copyright held by the owner/author(s). Publication rights licensed to ACM.

ISBN 978-1-4503-4233-9/16/05 ... $15.00

DOI: http://dx.doi.org/10.1145/2897845.2897895

Cloud services typically do not provide similar guarantees, which raises privacy, security and integrity concerns. Who will have access to my data, intentionally or unintentionally? Will the service continue to work properly tomorrow? Can the service read my data and use it for purposes I didn't have in mind? Will my data still exist in the same form tomorrow? Could a malicious system administrator at the service read or modify my data? If a system administrator's credentials are breached, could an attacker gain access to my data? For current Internet services, the answers to these questions are often unsatisfying.

On the other hand, cloud services provide many benefits that desktop users don't get. Cloud services provide availability through redundancy and replication and they remove the burden of maintenance from the user. In addition, the sheer number of available services seems to be surpassing the number of applications available on the desktop.

The ideas presented in this paper aim to combine benefits from the cloud-based service model with some of the guarantees with which desktop computer users are familiar. We focus especially on defending against insider attacks. For instance, the insider might be a malicious system administrator, a system administrator whose credentials have been compromised, or even a government order that compels the service to provide access to user data [15]. In addition, we define a policy-based mechanism that allows users to choose which security properties they expect in a secure service.

Our basic approach is to use transparency to deter unauthorized access. We use hardware support for sealed storage to ensure that user data is stored in encrypted form, and not directly accessible by insiders. Then, we build mechanisms to ensure that insiders cannot modify or introduce backdoors into the software without detection. Our design makes use of recent advances in security technology such as hardware-supported remote attestation and Certificate Transparency. Services store a hash of the software that they are running in a public audit log, and (conceptually) clients verify that the server is running code that matches a hash in the public audit log. This means that insiders cannot make undetectable changes to the server-side code; at least in principle, any such changes can be detected through examination of the public audit log. One challenge is how to ensure that these servers can be used from legacy clients, such as existing web browsers. We show how to achieve this goal by building on Certificate Transparency.

We claim the following contributions:

Insider attack protection We show how to build Internet services that are unalterable secure services. Such services provide integrity and confidentiality of computation and user data, are secure against insider attacks, and their functionality can be remotely verified.

Policy and update mechanism Although we describe services as unalterable, in fact, updates will often be desirable to fix bugs, improve usability or security, or add functionality. Also, instances of a secure service should not be locked to a single platform. We demonstrate how these goals can be achieved, without user-visible downtime for updates, and without compromising security or privacy. Finally, secure services may, under policy control specified by a user, be authorized to collaborate with other users' secure services while preserving the security promises; we show how this can be done as well.

Legacy client support In a manner similar to the way Certificate Transparency protects against CAs secretly issuing bad certificates, Attestation Transparency protects against service providers secretly changing the services they provide. This transparency provides a public record linking domain names to service implementations. Today's TLS clients immediately reap the benefits of the transparency framework, except in some cases when they are the victim of a targeted attack involving misissued certificates.

In the following section we review existing building blocks we use to build our framework. We provide a high-level overview of the paper in §3. In §4 we discuss the design of unalterable secure services while §5 describes how we augment Certificate Transparency to allow verification of these services. We evaluate our design in §6 and elaborate on potential use cases in §7. We then wrap up with related work in §8 and the conclusion (§9).

2. BACKGROUND

2.1 Secure Enclaves

We define a secure enclave as an isolated process, executed on a platform that provides confidentiality and integrity of code and data as well as sealing and attestation. In general, these technologies allow initializing an isolated and perhaps encrypted block of memory with a known program. Access to application memory is restricted by hardware and external access to the software is similarly restricted to identified entry points into the code. The software loaded in an enclave is also measured¹, allowing the hardware to attest to another party that the expected software was properly loaded and initialized and that the enclave software is isolated from other software running on the computer. The platform also provides a way to encrypt data so that the encrypted data can only be decrypted by this particular instance of the code running on this particular hardware. Different technologies provide such secure enclaves, including Intel SGX [4, 27], IBM SecureBlue++ [8], TPM-based Flicker [26], and perhaps ARM TrustZone [38].
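The measurement at the heart of this definition can be pictured as a hash over everything that determines the enclave's behavior. The sketch below (in Python for brevity; the paper's implementation language is Rust, and all names here are illustrative rather than any vendor's API) shows why two enclaves share an identity only if code and configuration match exactly:

```python
import hashlib

def measure(code: bytes, config: bytes) -> bytes:
    """Toy enclave measurement: a cryptographic hash over the loaded
    code together with any configuration affecting its behavior."""
    h = hashlib.sha256()
    # Length-prefix each field so distinct (code, config) pairs cannot collide.
    for field in (code, config):
        h.update(len(field).to_bytes(8, "big"))
        h.update(field)
    return h.digest()

# Any change to the code or its configuration yields a different identity I.
assert measure(b"service-v1", b"port=443") == measure(b"service-v1", b"port=443")
assert measure(b"service-v2", b"port=443") != measure(b"service-v1", b"port=443")
```

Length-prefixing the fields is the one design choice worth noting: without it, moving bytes between the code and the configuration could produce the same identity.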

2.1.1 Attestation primitive

Attestation is a mechanism that allows software to make statements that can be verified remotely by communicating parties, using attested statements. When talking about secure enclaves, some hardware-based root of trust, H, will attest that it is running a program with identity (measurement¹) I. In order for such a program to communicate with the outside world securely, it will need an encryption key K, and a way to securely announce to the outside world that it controls that key. Attestation provides such a mechanism: the hardware makes a statement of the form²

    A(I, K) = ⟨ H says "H runs I which says [K speaks for I]" ⟩.

¹ A measurement is typically a cryptographic hash of the software as loaded together with any configuration information which may affect the software behavior.

Platforms providing secure enclaves often provide ways for an entity I1 to endorse a particular program with identity I2. For example, I1 might cryptographically sign I2, and this signature can be verified as part of loading I2. Such an attestation is of the form

    A(I1 : I2, K) = ⟨ H says "H runs I2 which says [K speaks for I2]" and "I1 endorses I2" ⟩.

If the platform cannot verify the endorsement itself, a similar statement can still be formed by including the endorsement directly, as in

    A(I1 : I2, K) = ⟨ H says "H runs I2 which says [K speaks for I2]" ⟩ and ⟨ I1 says "I1 endorses I2" ⟩.

2.1.2 Sealing and encryption primitives

The general secure enclave concept does not include secure persistent storage. This is generally solved by using an untrusted persistent store and storing data only in encrypted form. This provides a form of secure persistent storage. We discuss some limitations of this scheme (such as rollback attacks) in §7.2.

Encrypting data in such a way that only a particular instance of a secure enclave can access it is called sealing. Generally, this is achieved by using a symmetric encryption key derived from both the program and hardware identity. The sealing operation turns the message m into the sealed text s = E_seal(m), while unsealing is m = D_seal(s).

In this paper we also use authenticated encryption, written c = E(K, m) and m = D(K, c).
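A minimal stand-in for E and D is sketched below: encrypt-then-MAC over a hash-derived keystream. This is a toy for illustrating the interface, not a vetted AEAD (a real system would use something like AES-GCM, and would use separate encryption and MAC keys).

```python
import hashlib
import hmac
import os

def _stream(key: bytes, nonce: bytes, n: int) -> bytes:
    """Derive n keystream bytes from (key, nonce) via counter-mode hashing."""
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def E(key: bytes, m: bytes) -> bytes:
    """c = E(K, m): random nonce, XOR keystream, then MAC (encrypt-then-MAC)."""
    nonce = os.urandom(16)
    ct = bytes(a ^ b for a, b in zip(m, _stream(key, nonce, len(m))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def D(key: bytes, c: bytes) -> bytes:
    """m = D(K, c): raises ValueError if the ciphertext was tampered with."""
    nonce, ct, tag = c[:16], c[16:-32], c[-32:]
    if not hmac.compare_digest(hmac.new(key, nonce + ct, hashlib.sha256).digest(), tag):
        raise ValueError("authentication failed")
    return bytes(a ^ b for a, b in zip(ct, _stream(key, nonce, len(ct))))
```

Sealing then amounts to using E with a key that only a particular enclave instance can derive (§2.2.2).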

2.2 Intel Software Guard Extensions

Intel Software Guard Extensions (SGX) [4, 17, 19, 27] are a recently announced hardware technology and instruction set extension providing secure enclaves. A special set of instructions can measure and encrypt a memory region before transferring execution control to it. The trusted computing base of SGX-based secure enclaves encompasses only the processor hardware, its microcode and the enclave image itself. In particular, the operating system is not part of the TCB. Data stored in memory regions belonging to the enclave is encrypted before it leaves the processor, so that the memory bus is also not part of the TCB. The security of this system is predicated on the correct functioning of the processor hardware and the SGX instruction set.

2.2.1 Attestation

SGX-enabled hardware can generate reports: integrity-protected statements about the enclave generated by the hardware:

    Report_local = MAC(I_enclave ‖ I_signer ‖ D_user).

² Following the Taos language [39].


The MAC key is different for each processor and private to the enclave that requested the report; only that enclave on the same processor can verify the report. I_enclave is the measurement of the code of the enclave the report was generated for, and I_signer is the public key that was used to sign that enclave before loading it. D_user is an arbitrary value that can be specified by the enclave when requesting the attestation report. This can be used to bind data to the attestation.

A special secure enclave provided by Intel, called the quoting enclave, can replace the MAC with a signature:

    Report_remote = Sign(I_enclave ‖ I_signer ‖ D_user).

The signing key is private to the processor and cannot be extracted or used for any other purpose. The corresponding public key is published by the vendor, and a third party can use it to verify that the report was created by actual Intel hardware.
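The local report can be mimicked with an HMAC whose key stands in for the per-processor report key; D_user is how an enclave binds application data, such as a hash of its TLS public key, into the report. Field names follow the text above, but this is a sketch, not the SGX wire format.

```python
import hashlib
import hmac

def _encode(*fields: bytes) -> bytes:
    """Length-prefixed concatenation of the report fields."""
    h = hashlib.sha256()
    for f in fields:
        h.update(len(f).to_bytes(8, "big"))
        h.update(f)
    return h.digest()

def report_local(report_key: bytes, i_enclave: bytes, i_signer: bytes,
                 d_user: bytes) -> bytes:
    """Report_local = MAC(I_enclave || I_signer || D_user)."""
    return hmac.new(report_key, _encode(i_enclave, i_signer, d_user),
                    hashlib.sha256).digest()

def verify_report(report_key, i_enclave, i_signer, d_user, report) -> bool:
    return hmac.compare_digest(
        report_local(report_key, i_enclave, i_signer, d_user), report)

# Bind a (hypothetical) TLS public key to the report via D_user.
d_user = hashlib.sha256(b"tls-public-key-bytes").digest()
r = report_local(b"per-cpu-key", b"I_enclave", b"I_signer", d_user)
assert verify_report(b"per-cpu-key", b"I_enclave", b"I_signer", d_user, r)
assert not verify_report(b"other-cpu", b"I_enclave", b"I_signer", d_user, r)
```

Replacing `hmac.new` with an asymmetric signature over the same fields gives the shape of Report_remote.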

2.2.2 Sealed storage

A special instruction can generate an enclave-specific sealing key. The key is derived as

    K_derived = H(I_enclave ‖ K_device ‖ ...)

where K_device is a hardware-embedded secret unique to this device. The enclave can use this key to encrypt data which can only be decrypted by the same enclave running the same code on the same hardware.

A different key can also be derived as K_derived = H(I_signer ‖ K_device ‖ ...). This key can be used to transfer data between enclaves running on the same hardware that were signed by the same public key. In this work, we don't use this key since it gives too much control to the signer.
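The two derivations differ only in which identity enters the hash. The sketch below (SHA-256 standing in for the hardware's key-derivation function; all inputs illustrative) shows why enclave-bound keys survive neither a code change nor a device change, while signer-bound keys are shared by every enclave from the same signer on one device:

```python
import hashlib

def seal_key(i_enclave: bytes, k_device: bytes) -> bytes:
    """K_derived = H(I_enclave || K_device || ...): same code, same device."""
    return hashlib.sha256(b"enclave|" + i_enclave + b"|" + k_device).digest()

def signer_key(i_signer: bytes, k_device: bytes) -> bytes:
    """K_derived = H(I_signer || K_device || ...): any enclave from this
    signer on this device (unused in this work)."""
    return hashlib.sha256(b"signer|" + i_signer + b"|" + k_device).digest()

dev = b"fused-device-secret"
assert seal_key(b"I_v1", dev) != seal_key(b"I_v2", dev)      # new code, new key
assert seal_key(b"I_v1", dev) != seal_key(b"I_v1", b"dev2")  # new device, new key
assert signer_key(b"S", dev) == signer_key(b"S", dev)        # shared per signer
```

The first property is exactly what makes updates non-trivial later (§4.2.3): a new version is new code, so it cannot re-derive the old version's sealing key.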

2.3 Certificate Transparency

The Certificate Transparency (CT) framework [22], as the name implies, aims to provide transparency to the issuance of TLS certificates. CT makes all legitimate TLS certificates a matter of public record, making it trivial to identify misissued certificates. The framework consists of several parts.

Public append-only logs A CT log server maintains a log of all certificates submitted to it. The log is structured as a Merkle tree which allows efficient verification of additions to the log. When a certificate is submitted to the log server, the server returns a Signed Certificate Timestamp (SCT). The SCT is a promise that the server will include the certificate in the log within a certain time limit, the maximum merge delay. The SCT can be used as proof to other parties that a certificate is part of the public record.

Monitors A monitor watches one or more CT log servers for suspicious changes. For example, a domain owner might know that all its certificates are issued by a particular CA. If a certificate for their domain issued by a different CA appears in a log, the monitor raises an alarm. The administrator can then act upon that alarm, e.g., by demanding the revocation of the phony certificate.

Auditors An auditor watches one or more CT log servers for consistency. It checks that the Merkle tree is updated consistently and that certificates are included as promised by SCTs. If it detects any inconsistency, it raises an alarm. The CT log owner will then need to explain the discrepancy or risk being shut down.

Browsers Once the CT framework is fully operational, TLS clients such as browsers can demand proof from TLS servers that the server's certificate appears in a log. TLS servers can provide this proof in the form of SCTs. If a certificate does not appear in the logs, that is suspicious, and the client can choose to abort the connection attempt.

3. OVERVIEW

We want application service providers on the Internet to be able to host secure services. These services must be able to store and handle user data securely. By secure, we mean that the data's confidentiality and integrity are preserved in the face of attacks within the scope of the threat model set forth below, which includes insider attacks. While we cannot absolutely prevent insiders from violating security, the transparency mechanism guarantees that such violations will be publicly detectable.

Users of a secure service must be able to verify that their client is connected to a specific service that is known to provide those security properties. Legacy clients must be supported: users must be able to obtain most of the security benefits without installing special software. Beyond legacy clients there must be an incremental deployment path. Performance loss compared to insecure services should be minimal. Services must also be updateable, and the security properties must be maintained in the update process.

3.1 Threat model

We assume the server hosting the service uses some secure enclave technology that prevents the adversary from accessing the code and data running in the enclave. We allow adversaries all the capabilities of an active network attacker as well as full control over non-enclave software running on the computer hosting the service (e.g., for SGX enclaves, this includes control over the operating system). For instance, an insider might add malicious software, or service provider personnel might accidentally misconfigure the service; these are included in the threat model. The adversary also has the ability to run its own servers mimicking real services. We assume the user's client is secure and cannot be tampered with. The threat model is depicted in Figure 1.

Availability is out of scope for this paper. If a maliciousinsider wishes to destroy all user data or deny access to theservice, they can do so.

3.2 Design overview

The basic idea is to run all of the service code (including TLS session establishment, request handling, and storage) in an enclave on the server. This provides isolation and ensures that even insiders on the server cannot tamper with the running code or memory of the service. Also, we use sealed storage to prevent malicious insiders from reading or modifying data stored by the service on persistent storage: effectively, all data is encrypted before it leaves the enclave.

Figure 1: Secure service threat model. The user's client connects over the Internet to a server hosting a secure enclave; the shaded areas may be controlled by the adversary.


The user connects to the server, using TLS to establish a secure channel between the client and server. We use remote attestation to allow the user to verify what code is running in the enclave: secure hardware on the server provides a signed statement indicating what code has been loaded into the enclave. A fully attestation-aware client would then use this attestation to verify that the server is running the expected code.

We also show how legacy clients can access the server and gain a subset of the security guarantees. Legacy clients also connect to the server via TLS. Conveniently, the TLS protocol is widely supported and provides a secure channel to the server, while verifying the authenticity of that server. As is usual for TLS, the client checks that the server's TLS certificate is valid and authenticates the server using the public key found in this certificate. Our system extends the guarantees provided by this authentication step by further constraining the use of the private key.

In particular, a secure service runs inside a secure enclave and it will generate its TLS private key there. The TLS private key will never leave the enclave in unencrypted form; it is stored using sealed storage, so that only the enclave can retrieve it. Thus, even insiders cannot learn the service's TLS private key.

When the service is first created, it publishes its TLS public key in an attested statement proving that the enclave was launched with a certain code and that that code generated the key. Legacy clients not built with Attestation Transparency in mind won't be able to verify these attestations, but the key idea of our system is that another party can do so on their behalf. Because the attestations are public, anyone can check what code the service will run, that the code is secure, that it will never reveal its TLS private key, and that it protects itself adequately from malicious insiders. This allows word to spread through out-of-band channels that the service is trustworthy. For instance, an expert might inspect the code published by good.com and determine that it is trustworthy and will never leak its TLS private key; inspect the attestation and TLS certificate and determine that the TLS keypair was generated by this enclave, and the public part is in the TLS certificate; and then spread the word that good.com can be trusted.

Of course, a malicious insider at good.com could always take down the secure service and replace it with malicious code, running outside an enclave. An attestation-aware client could detect this (because the attestation will change), but a legacy client could not. However, this attack is detectable. To mount such an attack, the insider would need to generate a TLS keypair and get a new certificate issued for it (because legacy clients expect to connect over TLS), and hand the new private key to the malicious code. This is detectable because it triggers issuance of a new certificate for good.com. In particular, we use Certificate Transparency to detect issuance of new certificates. In our design, secure services publicly commit to always publish a new attestation any time they update the service or obtain a new certificate. Thus, issuance of a new certificate without a corresponding published attestation indicates an attack or error. Crucially, because all of this information is public, anyone can monitor the published information and detect these situations, providing transparency.
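The monitor's core check reduces to simple set logic over the two public logs. In the sketch below, log entries are plain dicts and keys are compared by value; a real monitor would parse certificates and attestation reports, and all names are illustrative.

```python
def find_violations(domain: str, cert_log: list, attestation_log: list) -> list:
    """Flag certificates for `domain` whose public key was never bound to a
    published attestation: the telltale sign of an insider standing up
    non-enclave code behind a freshly issued certificate."""
    attested_keys = {a["pubkey"] for a in attestation_log
                     if a["domain"] == domain}
    return [c for c in cert_log
            if c["domain"] == domain and c["pubkey"] not in attested_keys]

certs = [{"domain": "good.com", "pubkey": "K1"},
         {"domain": "good.com", "pubkey": "K_rogue"}]
atts = [{"domain": "good.com", "pubkey": "K1", "identity": "I_service"}]
assert find_violations("good.com", certs, atts) == [
    {"domain": "good.com", "pubkey": "K_rogue"}]
```

Because both logs are public and append-only, anyone can run this check, which is what turns a silent key substitution into a publicly visible event.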

Because our design focuses on transparency about what code the service will run, we call it Attestation Transparency.

Figure 2: Overview of Attestation Transparency. (1) The secure service emits the certificate request and attested statement. (2) The attested statement and certificate are submitted to the Attestation and Certificate Transparency logs. (3) The secure service receives the certificate produced by the CA. (4) The user can now establish a regular TLS connection with the secure service. (5) The transparency log monitor independently monitors the transparency logs for possible violations.

It extends Certificate Transparency to allow publishing these independently-auditable attestations. Legacy clients can rely on Certificate Transparency to ensure that attacks will be publicly detectable, while future attestation-aware clients can verify attestations themselves.

A diagram of the entire system is shown in Figure 2.

3.3 Policy model

To verify that a secure service will act "as promised", the user must verify that (a) the service code correctly implements the intended behavior and (b) no other program on the same computer will be able to interfere with the operation of the service code or observe its memory. The mechanisms described below allow a service to prove to a user what service code will execute in the server enclave. Thus, in principle, a user could examine all of the code and convince themselves that it will act "as expected" and will provide all the desired security guarantees. However, in practice few users will be able to do this: code analysis is expensive and beyond the capabilities of most end users.

To address this challenge, we provide several flexible mechanisms to enable users to verify that the service code will meet their needs (§5.3). One option is that the user may rely on a cryptographically signed statement from the secure service developer naming both the service identity and the (user-comprehensible) promised behavior. Since the developer can produce the implementation corresponding to the identity, it can be verified by third parties or used as a basis for legal recourse in the case the service does not, in fact, conform to the promised behavior. Alternatively, a user can rely on either an authority (for example, in the case of an enterprise service, the enterprise "IT" department) or an auditor to decide which services are trustworthy. The authority cryptographically signs a statement representing that the service code conforms with expected behavior in all respects. These can be securely and automatically checked if so designated. Alternatively, a user may rely on a set of reviewers of the secure service code who cryptographically sign such statements. Reviewers might have different motivations, some altruistic and some self-serving. Further, a user may employ policy rules to automatically determine if the specified behaviors are adequate. For example, the user may insist that the EFF examine the service and certify that it meets a designated privacy policy, and that a consumer agency or a product reviewer also sign a conformance statement, and that the developer be one of a named set of developers that the user feels comfortable with. There are other alternatives that ensure compliance with the user's needs and relieve the user of the need to conduct extensive reviews themselves.

Usually, a user will use the same policy model to decide whether an update meets those same specified needs and, if it does, whether user data accessible to previous versions can be made available to subsequent versions.
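Such policy rules lend themselves to mechanical checking. The fragment below evaluates a rule shaped like the example above; the endorser and developer names are purely illustrative.

```python
def satisfies_policy(developer: str, endorsers: set, policy: dict) -> bool:
    """A service identity is acceptable iff every required endorser has
    signed a conformance statement for it and the developer is one of
    the user's named, trusted developers."""
    return (policy["required_endorsers"] <= endorsers
            and developer in policy["trusted_developers"])

policy = {
    # e.g., "the EFF must certify the privacy policy and a consumer
    # agency must sign a conformance statement" (illustrative names)
    "required_endorsers": {"EFF", "ConsumerAgency"},
    "trusted_developers": {"good.com"},
}
assert satisfies_policy("good.com", {"EFF", "ConsumerAgency", "Acme"}, policy)
assert not satisfies_policy("good.com", {"EFF"}, policy)   # missing an endorser
assert not satisfies_policy("evil.com", {"EFF", "ConsumerAgency"}, policy)
```

In a deployment, membership in `endorsers` would be established by verifying each reviewer's signature over the service identity, so the check itself stays this simple.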

4. SECURE SERVICE DESIGN

This section presents the architecture we use to implement unalterable secure services. Unalterable here means that the functionality of the service cannot be changed. This allows a client of the service to view the service as an extension of the client itself and not just a third-party program subject to the whims of another entity.

Our architecture runs services inside a secure enclave and encourages secure software development. To reduce the attack surface, the architecture presents a limited interface to the programmer that should be sufficient for Internet services, and the interface is implemented in a memory-safe and type-safe language, Rust.³

4.1 Secure service interface

In our architecture (Figure 3), only the CPU and the code inside the secure enclave are trusted. The secure enclave has no input/output capabilities and relies on an untrusted driver for (insecure) access to the outside world. The untrusted driver is part of the host operating system and provides persistent storage (e.g., via C Standard I/O Streams), networking (e.g., via BSD sockets), and inter-process communication (e.g., via stdin/stdout/stderr). On top of this, the secure enclave library implements encrypted networking using TLS (e.g., using a standard TLS library), encrypted and sealed storage, attestation, and IPC. Software developers use these secure primitives to write their secure service. The secure enclave library and application code together form the secure enclave.

Secure networking Secure networking is provided using TLS. The secure client interface allows connecting to an Internet address using TLS and verifying the connection using default methods (per RFC 5280 [13] and RFC 6125 [32]). There is also an option to connect to another secure service running in an enclave and have it attest to the secure channel parameters. The client will then verify that the attestation is valid and matches the expected server enclave identity. The secure server interface will listen on a specified port and accept TLS connections from clients. This is the main communication mechanism for a service using our architecture. There is also an option to accept connections from clients that are themselves running in an enclave and have that client identify itself and attest to the secure channel parameters.

³ Available at https://github.com/jethrogb/secserv/.

Table 1: Secure storage types

    Benefit                                               Sealed  Keyed  Both
    Protect against blanket access after breach                   X      X
    Protect against offline attack versus weak user key   X              X
    Recover data after hardware failure                           X

Secure storage The secure storage interface allows the application to store persistent data safely. The interface provides access to different data objects identified by their name or path, while using an encrypted storage backend. There are three possible keying schemes for encrypting the data before storage: using the sealing key, a user key, or both (encrypted with a user key, then sealed). The different schemes have different benefits, as shown in Table 1. In case of a breach (e.g., due to a faulty update (see §5.3) or a code bug) using sealing only, in its simplest form, is inadequate. Further, sealing, in its simplest form, is hardware-dependent and any sealed data is lost after a hardware failure. Using a per-user key based on the user's password enables password-guessing attacks if the password is weak. This is of particular concern since, in addition to online attacks via the normal service authentication mechanism that all Internet services have to deal with, in our model an adversary can perform offline attacks on the stored data.

Attestation The attestation interface allows a secure enclave to have the hardware attest to a key. It can also verify that attested statements match a certain identity and extract the key that was attested to.

IPC Services may use inter-process communication, e.g., for inputting configuration data and logging. Since this channel is not secure, no sensitive information should be logged through this channel, and it must not be used for configuration that changes the security properties of the service. Instead, such configuration needs to be part of the enclave measurement.
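The secure-storage trade-offs in Table 1 follow directly from how the keys compose. In the toy model below (a SHA-256 keystream standing in for real, authenticated encryption), "both" wraps user-key encryption inside sealing: an attacker then needs the device secret, the code identity, and the user key, while keyed-only data remains recoverable on new hardware.

```python
import hashlib

def _xor(key: bytes, data: bytes) -> bytes:
    """Toy keystream cipher (deterministic, unauthenticated): placeholder only."""
    out, ctr = b"", 0
    while len(out) < len(data):
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return bytes(a ^ b for a, b in zip(data, out))

def seal_key(i_enclave: bytes, k_device: bytes) -> bytes:
    return hashlib.sha256(i_enclave + b"|" + k_device).digest()

def store_sealed(m, i, dev):
    return _xor(seal_key(i, dev), m)

def store_keyed(m, user_key):
    return _xor(hashlib.sha256(user_key).digest(), m)

def store_both(m, user_key, i, dev):
    return store_sealed(store_keyed(m, user_key), i, dev)  # user key, then seal

# Recovery is XOR with the same key(s), in reverse order for "both".
def recover_sealed(c, i, dev):      return _xor(seal_key(i, dev), c)
def recover_keyed(c, user_key):     return _xor(hashlib.sha256(user_key).digest(), c)
def recover_both(c, user_key, i, dev):
    return recover_keyed(recover_sealed(c, i, dev), user_key)

m, uk, i, dev = b"user record", b"user-password-key", b"I_svc", b"dev1"
c = store_both(m, uk, i, dev)
assert recover_both(c, uk, i, dev) == m
# After a hardware failure (new device secret), sealed and "both" data is gone:
assert recover_both(c, uk, i, b"dev2") != m
# But keyed-only data survives on new hardware:
assert recover_keyed(store_keyed(m, uk), uk) == m
```

The same exercise also shows the offline-attack row: with keyed-only storage, an insider holding the ciphertext can guess user keys at will, whereas the sealed wrapper forces every guess through the enclave.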

4.2 Secure server

Figure 3: Secure service architecture. The application code and enclave library together form the secure enclave, which runs on the CPU alongside an untrusted driver inside the operating system, with storage and network below; the shaded blocks are not trusted.


Using the primitives defined in the previous section, we can build an unalterable secure service. Keeping the private key K⁻¹_server of a TLS server in sealed storage and never exporting the key outside the enclave ensures only a particular secure enclave instance can have access to it. This means that when one establishes a TLS connection with a server that uses that K_server to authenticate its key exchange, the server endpoint is guaranteed to terminate inside the enclave.

Since the private key should never exist outside the enclave, it must be generated inside the enclave. The key setup procedure for the secure service enclave is shown in Figure 4. Input and output happen through the IPC channel.
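The key setup flow can be sketched as a small state check over sealed storage. In the sketch below a dict stands in for the enclave's sealed storage, and the key-generation and CA-interaction helpers are hypothetical stubs, not the paper's actual interface.

```python
def generate_private_key() -> str:
    """Stub: in the real design the key is generated inside the enclave."""
    return "PRIVATE-KEY"

def request_certificate(csr: str) -> str:
    """Stub: the certificate request leaves via IPC and a CA answers."""
    return "CERT(" + csr + ")"

def start_service(sealed: dict) -> str:
    """Figure-4-style startup: reuse the sealed TLS key and certificate
    when present, create them otherwise, then start the TLS listener."""
    if not sealed.get("tls_key"):
        sealed["tls_key"] = generate_private_key()       # never leaves unsealed
    if not sealed.get("tls_cert"):
        csr = "CSR-for-" + sealed["tls_key"]
        sealed["tls_cert"] = request_certificate(csr)
    return "listening"                                    # TLS server is up

store = {}
assert start_service(store) == "listening"   # first boot: key and cert created
key_once = store["tls_key"]
assert start_service(store) == "listening"   # restart: both are reused
assert store["tls_key"] == key_once
```

The ordering matters: the certificate check comes second because a certificate is only meaningful for the key already in sealed storage.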

All the service's static content (e.g., for a website, images and layout elements) must be included in the server binary that will be measured upon enclave startup. All dynamic/user content must be stored in secure storage.

4.2.1 Horizontal scaling

Once one instance of a service is running, another instance can connect to it and they can both verify that they're instances of the same program. After the verification, sensitive information can be shared (over a secure channel established using a secure key-exchange protocol). Any kind of distributed service can be supported this way.

4.2.2 Multiple enclaves

A service might consist of multiple parts, for example a database server and an application server. The enclaves should validate each other's identity and establish a secure channel between the two. There are at least two secure ways to implement this.

Consider an enclave A that accepts connections from clients and provides controlled access to information based on the client's identity. A second enclave B wants to use enclave A's service and has fixed A's identity in its loaded and measured code. Enclaves A and B establish a secure channel and both attest to their parameters. Enclave B can verify A's attestation and see that the identity matches what is expected. Enclave A can verify B's attestation and provide B access to the information B is authorized to access.

If both enclaves must verify each other's identity using embedded identities, there is a chicken-and-egg problem. Since the identity of a program changes when a different identity is included in it, it's not possible for both programs to have the other's identity fixed in their code. It is also not secure to rely upon a system administrator to sign the identities of the two enclaves, since an insider could falsely sign the identity of a malicious enclave. One solution is to combine multiple programs into a single one with multiple operating modes. Then the same solution used for horizontal scaling can be applied.

4.2.3 Updates
When a service is updated, its persistent data will need to be updated too. Data encrypted with a user-dependent key can be used directly by the newer version. However, since the new service identity will be different from the previous version's, all data stored in sealed storage is lost. Sealed data will need to be moved to the new version before the old version can be retired.

A secure channel will need to be established between the old and the new version; see §5.3.1 for details on the authentication of this channel. Once the channel is established, the old version can unseal the data in question and send it across. The new version receives the data and immediately puts it in sealed storage. If there is too much data to be transferred over the secure channel, that data should instead be encrypted with an enclave-generated secret key. The key itself can then be stored in sealed storage and transferred on updates.

Figure 4: Key management. (Flowchart: if no valid private key is in sealed storage, generate and store one and output a certificate signing request; if no valid certificate is in sealed storage, input and store one; then start the listening TLS server.)
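The large-data case above can be sketched as follows. The toy XOR keystream and the function names are ours, standing in for real authenticated encryption and the sealing API: bulk data stays encrypted under an enclave-generated data key, and only that small key is unsealed, transferred, and re-sealed.

```python
# Sketch of the update-time data hand-off. NOT secure: a SHA-256-based XOR
# keystream stands in for real authenticated encryption.
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key: bytes, data: bytes) -> bytes:  # toy cipher, for illustration only
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

decrypt = encrypt  # XOR stream: the same operation both ways

def migrate(old_sealed: dict, new_sealed: dict) -> None:
    """Old enclave unseals its data key and sends it over the secure channel;
    the new enclave seals it immediately. The bulk ciphertext in untrusted
    storage never needs to be re-encrypted."""
    data_key = old_sealed["data_key"]   # unseal in the old version
    new_sealed["data_key"] = data_key   # transfer and re-seal in the new version
```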

5. CLIENT VERIFICATION OF SECURE SERVICES

The previous section described how to construct a secure service. This section will explain how a client connecting to such a service can verify that service.

Consider the simple scenario in which the server sends an attestation A(I_enclave) to the client as part of establishing a secure channel. The client will need to verify both the attestation and the identity. As a straw-man proposal, envision a service provider distributing a client program that includes a fixed identity and can verify attestations expected for a particular service. This would require users to install a different client for each service they want to use. Additionally, since the identity is fixed in the client, service updates would also require a client update.

A more general client could contain the logic to verify all possible attestation mechanisms, as well as maintain a list of all acceptable identities. Done naively, this would be worse logistically, since now a new client needs to be distributed for every service update.

5.1 Attestation Transparency
Instead of creating a new verification mechanism that clients would need to implement, we build a verification mechanism on top of an existing mechanism that clients already know how to use: Public-Key Infrastructure. Under our proposed scheme, all the client needs to do to trust the secure service is verify the TLS server certificate using standard existing methods.

Our scheme, called Attestation Transparency, is an extension of the Certificate Transparency framework [22]. Remember that in our unalterable secure service model, demonstrating possession of K⁻¹_server by an entity implies that it is a secure service instance. The core idea of Attestation Transparency is that a secure service provider publishes (once) in the Attestation Transparency log an attested statement A(I_enclave, K_server). With this, they announce to the world that an entity possessing K⁻¹_server is an instance of I_enclave. The secure service provider also obtains a valid TLS server certificate for K_server through normal means and inputs it into the enclave. The certificate binds a Common Name (CN) to the key K_server, and the published attested statement binds that to the identity I_enclave. When a client establishes a TLS connection with the enclave, it verifies the certificate and the enclave uses its K⁻¹_server to sign the key exchange, after which the client has established a secure channel with the enclave. The whole process is depicted in Figure 2 on page 4.
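The chain of bindings just described (CN to K_server via the certificate, K_server to I_enclave via the published statement) can be sketched as a lookup. The tuple and list encodings are our own stand-ins for parsed certificates and AT log entries, and we assume standard PKI validation of the certificate has already succeeded.

```python
# Sketch of chaining the two bindings: certificate (CN -> key) and published
# attested statement A(identity, key) (key -> identity).

def bound_identity(certificate, at_log, trusted_identities):
    """Return the enclave identity a CN is bound to, or None if unverified."""
    cn, key = certificate                  # assume PKI validation already passed
    for identity, attested_key in at_log:  # published statements A(I, K)
        if attested_key == key and identity in trusted_identities:
            return identity                # the cn -> key -> identity chain holds
    return None
```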

An Attestation Transparency monitor performs a similar function to a Certificate Transparency monitor. The AT monitor continuously watches the CT logs for certificates issued to the CN identifying the secure service it is interested in. Once a new certificate with public key K is logged, the monitor checks the AT logs to see if any attested statements A(I, K) with that same key K exist. If such an attested statement does exist, the monitor checks whether the identity I is valid for that service. If the identity is invalid, or no attested statement was found in the log, the monitor raises the alarm.

To prevent spamming, an AT log might require proof of existence of a valid certificate in the CT logs before accepting statements for submission. As such, there can be a short period of time during which a certificate exists in the CT logs without a corresponding statement in the AT logs. Monitors will need to take this into account and choose an appropriate wait time (e.g., two maximum merge delays) before raising the alarm. This wait time is the maximum time during which clients could be vulnerable to attack before it could be noticed.
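The monitor logic above can be sketched as a single pass over the logs. The dict field names and numeric timestamps are our own simplifications, not a real CT/AT data model.

```python
# Sketch of an AT monitor pass: for each certificate logged for the watched CN,
# look for a matching attested statement A(I, K), allowing a grace period
# (e.g. two maximum merge delays) before raising the missing-statement alarm.

def monitor(ct_entries, at_statements, valid_identities, now, max_merge_delay):
    alarms = []
    wait = 2 * max_merge_delay
    for cert in ct_entries:  # each entry: {"key": ..., "logged_at": ...}
        match = next((s for s in at_statements if s["key"] == cert["key"]), None)
        if match is None:
            if now - cert["logged_at"] > wait:       # grace period expired
                alarms.append(("missing-statement", cert["key"]))
        elif match["identity"] not in valid_identities:
            alarms.append(("invalid-identity", match["identity"]))
    return alarms
```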

5.2 Incremental deployment — logs
While from the previous description it may sound like the AT log is separate from the CT log, this is not necessarily the case. Instead, attested statements can be included in a certificate as an X.509 certificate extension.⁴ The secure service can output the attested statement in the requested extensions section of its certificate signing request. As Certificate Transparency is already being deployed, this means Attestation Transparency does not require any new infrastructure. We propose minor changes to CT to support AT, along with an incremental deployment path towards a smoother process in the future.

5.2.1 Fake attestation certificates
Currently the only data that can be included in the CT logs are certificates and precertificates.⁵ To prevent spam, the only certificates accepted in the logs are those signed by a known CA.

In order to publish attested statements in the CT logs, we propose that CT logs also accept (pre)certificates from an 'Attestation Authority' (AA). This is a fake Certification Authority that only issues precertificates and is not trusted by regular TLS clients. The AA follows a simple procedure: it takes as input a certificate, a Signed Certificate Timestamp (SCT) and a certificate signing request that includes a statement as an extension. The AA verifies the certificate and SCT, and verifies that the CSR includes the same public key. It then issues a precertificate with the same Subject Name and public key, including the statement extension and a pointer to the real certificate. The AA will only issue one precertificate per real certificate.

⁴We have allocated OID arc 1.3.6.1.4.1.4995.1000.4.1 for this purpose.
⁵Precertificates are similar to regular certificates, conveying the same information. However, they are constructed in such a way that they can't be used in place of a regular certificate.
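The Attestation Authority's procedure can be sketched as follows; the dict field names stand in for real X.509 and SCT structures, and a serial number stands in for a certificate fingerprint.

```python
# Sketch of the AA issuance checks: the SCT must cover the certificate, the CSR
# must carry the same public key, and at most one precertificate is issued per
# real certificate.

def issue_precertificate(aa_state, certificate, sct, csr):
    if sct["for_serial"] != certificate["serial"]:
        raise ValueError("SCT does not cover this certificate")
    if csr["public_key"] != certificate["public_key"]:
        raise ValueError("CSR key does not match certificate key")
    if certificate["serial"] in aa_state["issued"]:
        raise ValueError("precertificate already issued for this certificate")
    aa_state["issued"].add(certificate["serial"])
    return {"subject": certificate["subject"],        # same Subject Name
            "public_key": certificate["public_key"],  # same public key
            "statement": csr["statement"],            # attested-statement extension
            "points_to": certificate["serial"]}       # pointer to the real cert
```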

5.2.2 Attested statement log entries
An alternate first step in the deployment process is to move the Attestation Authority's responsibilities into the CT log server. This requires a change in the CT specification to add a new entry type for attested statements. The inputs for the submission procedure will be the same, and the verification and spam protection measures will be the same; only the output will differ: an attested statement-type entry in the CT log, as well as an SCT for this entry.

While this setup increases the functionality and complexity of the CT log, it reduces the logistical complexity compared to using an Attestation Authority.

5.2.3 Certificate extensions
It would be much more convenient to simply include the attested statement as an extension in the actual valid end-entity TLS certificate. This would eliminate the need for any changes to the current CT system. It would also solve the issue of a potential delay between the appearance of the certificate and of the attested statement in the logs.

It is currently practically infeasible to obtain certificates with such an extension. We contacted a total of 9 subsidiaries of the 6 largest Certification Authorities (Comodo, Symantec Group, Go Daddy Group, GlobalSign, DigiCert, StartCom) to see if they would issue certificates with this extension. Of the CAs we contacted, 5 did not respond to our inquiry or did not understand the request, 3 were unable to provide such certificates, and 1 was unsure whether it would be possible, but said that if it was, it would cost an additional US$5,000. We considered (ab)using an existing extension, but were unable to find a suitable one for the type of data we'd want to include.

We encourage CAs to support Attestation Transparency extensions in the future.

5.3 Validating enclave identities
The previous discussion depends on being able to determine what constitutes a valid enclave identity. This is mostly a matter of policy, and as such we present a mechanism that supports different policies. For each service, some entity or group of entities—known as the policy administrator—is in charge of verifying the policy for an enclave identity. The policy administrator maintains a private key for each service policy. After verification, the policy administrator signs the enclave code, indicating that the policy was met. For example, the EFF could establish a service that audits code for privacy violations and certify complying code by signing it. These compliance certificates can be used as an automated or semi-automated mechanism by client software to determine whether it trusts the code.


When a system runs such a signed enclave, it will issue attestation statements of the form A(I_signer : I_enclave, K_server). An AT monitor will maintain a list of policy administrators it trusts for a specific CN. Now, the monitor need not itself verify the enclave identity in an AT log entry; it can instead rely on a valid signature from the policy administrator.
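Deferring the identity check to a policy administrator can be sketched as follows. In reality the administrator would use a public-key signature; a shared-key HMAC stands in here only to keep the sketch self-contained, and the statement encoding is our own.

```python
# Sketch of a monitor checking A(I_signer : I_enclave, K): instead of judging
# the enclave identity itself, it verifies the trusted administrator's
# signature over that identity. HMAC stands in for a real signature scheme.
import hashlib
import hmac

def sign_identity(admin_key: bytes, enclave_identity: str) -> str:
    """Policy administrator signs an enclave identity it has vetted."""
    return hmac.new(admin_key, enclave_identity.encode(), hashlib.sha256).hexdigest()

def monitor_accepts(statement, trusted_admin_keys) -> bool:
    """statement = (signer_name, enclave_identity, signature, server_key)."""
    signer, identity, signature, _key = statement
    admin_key = trusted_admin_keys.get(signer)  # admins trusted for this CN
    if admin_key is None:
        return False
    return hmac.compare_digest(signature, sign_identity(admin_key, identity))
```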

5.3.1 Handling updates
This mechanism also enables code updates to services, by having an old version of the service check the policy for the new version. This check is embedded in the code for the old service, and has undergone the same vetting process as the rest of the code. The whole process for updating from an old service S1 to a new service S2 is as follows.

    D  --S2-->  P                                   (1)

When a developer D is ready to update their service, they send their binary S2, optionally with documenting materials, to the policy administrator P.

    P  --Sig_P(I2)-->  D                            (2)

The administrator verifies that the new code meets the policy and signs it.

    SP : S2  --CSR(K), A(P : I2, K)-->  CA, AT      (3)

The service provider SP launches the signed enclave S2, which outputs a certificate signing request including the attested statement. The service provider submits the CSR to a CA, and the attested statement is published to the AT logs.

    CA  --Cert_CA(K)-->  SP : S2, CT                (4)

The CA signs a certificate, which is submitted to the CT logs. With this publication, the policy administrator has announced to the world that I_enclave conforms to the policy established by I_signer. The service provider inputs the certificate into the signed enclave and launches the service. AT monitors will see the new service with the new certificate and can use the CT/AT logs to verify that everything is in order.

    S2  <--secure channel-->  S1                    (5)

S2 establishes a mutually authenticated secure channel with S1. Both sides must verify the code identity of the other side through attestation, as well as check the Certificate Transparency proofs for their keys. Doing both validates the service code and ensures that there is a public record for this particular service instance.

    S2  --Proof(A(P : I2, K) ∈ AT)-->  S1           (6)

S2 provides S1—which is configured to accept policy statements from policy administrator P—with proof that an attestation A(P : I2, K) appears in the AT logs.

    S2  <--sealed data--  S1                        (7)

S1 subsequently transfers its sealed data to S2.

The update process can be performed without downtime for the users. Users can keep using the old version of the service as long as its certificate is still valid. Once the new certificate has been obtained and the required publications in the Transparency logs have been made for the updated service, it can start accepting connections. From then on, clients will see the new certificate and Transparency log proofs, indicating that they are now using the updated service.
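The proof in step (6) is an inclusion proof against a Merkle-tree log. A simplified sketch of the audit-path computation follows; note that RFC 6962 additionally prefixes leaf and interior hashes with distinct bytes, which we omit for brevity.

```python
# Sketch of a CT-style Merkle audit-path check: recompute the log's root hash
# from a leaf and its audit path, then compare against the signed tree head.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def root_from_proof(leaf: bytes, index: int, path: list) -> bytes:
    """Fold the audit path, going left/right according to the leaf index."""
    node = h(leaf)
    for sibling in path:
        if index % 2 == 1:           # current node is a right child
            node = h(sibling + node)
        else:                        # current node is a left child
            node = h(node + sibling)
        index //= 2
    return node
```

S1 would compare the recomputed root against a tree head signed by the log before accepting that A(P : I2, K) is published.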

5.3.2 Enclave policies
A policy shall at least require that the TLS private key not be leaked, and that updates be considered valid only when accompanied by a proof that they appear in the Attestation Transparency logs. While a policy administrator may issue a signed policy statement erroneously, the statement will be ineffective until published. Once published, others can hold the policy administrator accountable. It is also important that an entity controlling a Certificate Transparency log signing key must not also control a policy signing key. Such an entity would be able to issue signed policy statements and obtain a signed 'proof of inclusion' from the log without actually publishing the statement.

Policies can cover a variety of use cases, from most transparent to not transparent. Care must be taken when a single party has a fair amount of control over what would be considered a secure service under their policy. Such constructions should always be paired with the ability for independent entities to verify their claims post facto using transparency. We propose the following policies:

Open-source reproducible builds The software developer publishes their source code publicly with a mechanism for reproducible builds. The same developer doubles as the policy administrator and builds the binary and signs it before handing it off to the service provider.

Independent audit The software developer hands their source code to an independent auditor. The auditor vets the secure service and describes the security properties the service has in a policy. It signs the binary and publishes the policy. When, later, the developer submits an updated version of the software, the auditor checks whether it meets the security requirements per the established policy.

As an extension to this scheme, an independent auditor could maintain several standard policies. For example, they might have a 'secure IMAP service' policy. Anyone would be able to write software that adheres to the policy, and the auditor can verify and sign all such software. This effectively creates an interoperable system of secure IMAP services, where data can be transferred from one service to another while maintaining the security properties.

Self-published with legal obligations The software developer hands their source code to a publisher. The publisher builds the binary and signs it before handing it off to the service provider. The publisher also promises (e.g., by incorporating a clause in their terms and conditions) that enclaves they sign exhibit certain properties.

Enterprise-local audit An enterprise might maintain a set of policies for secure services it runs internally. It can have an internal audit team that vets service updates. This way the enterprise gets the benefit of protection from insider attacks as well as local control.

5.4 Incremental deployment — clients
We present an incremental deployment path for Attestation Transparency that makes it immediately useful for today's clients while improving security guarantees for future clients.

Current clients Initially, clients without Certificate Transparency support will benefit from the existence of the CT/AT ecosystem, as independent entities can monitor the published certificates and statements. However, there are no guarantees for such clients and targeted attacks are possible. While the CT logs might include a valid certificate for some domain, a client without CT support can be presented with a valid certificate that does not appear in the logs, and the client would be none the wiser.

Clients supporting Certificate Transparency Once clients support Certificate Transparency, a process which is already in motion, they will get most of the benefits of Attestation Transparency as well. Suppose a service has subscribed to our secure service paradigm, promising to publish in the AT log in conjunction with the CT log. Then, by checking the Signed Certificate Timestamps when setting up a connection, a client can be sure that the server published its attestation if it has one. A user still needs to rely on manual verification or word-of-mouth to know whether a particular service at a particular domain is in fact a secure service.

Clients supporting Attestation Transparency A client that can check the attested statements will be able to indicate to the user that it is in fact connected to a secure service.

Clients supporting remote attestation Clients supporting remote attestation get even stronger guarantees than those just supporting Certificate Transparency. With remote attestation, a client can verify that the server it is connected to is actually running inside a secure enclave. This is helpful in case a server's TLS private key got leaked inadvertently. A third party could run a modified service with the TLS private key, thus impersonating the secure service under the previous three mechanisms. When using remote attestation directly, this third party could not produce a correct attestation if the service were modified.

6. EVALUATION
Since Intel SGX was not yet available at the time of writing,⁶ we performed our evaluation by implementing secure services on top of CloudProxy [25]. CloudProxy provides an abstraction over security primitives similar to the ones Intel SGX provides. A current implementation is available using Linux with a hardware root-of-trust based on the Trusted Platform Module (TPM) interface. This implementation has a much larger TCB than pure SGX would have, including the bootloader, the CloudProxy hypervisor, and the entire OS kernel.⁷

We performed two case studies on secure services to evaluate developer effort and performance loss. First, to see how difficult programming with our secure service paradigm would be, we implemented a secure web server "from scratch." Second, we molded an existing web application stack into our secure service model to test the performance loss incurred. The original CloudProxy technical report [25] includes further performance measurements. We do expect slightly better performance on CloudProxy than we would see on real SGX hardware, since there is no runtime memory encryption overhead. All our tests were performed on an Intel NUC 5i5MYHE with an Intel Core i5-5300 processor, 8 GB RAM and a Samsung 850 EVO SSD, running Ubuntu 15.04 with a Linux 4.0.7 kernel.

⁶Hardware with SGX is currently available in both Core and Xeon E3 processors, but Intel had yet to release the required additional software support.
⁷The OS kernel could be significantly reduced or even replaced by a small run-time library, but we have not done this.

6.1 File hosting service
We implemented the secure service interface in a memory-safe and type-safe but fast language, Rust [31]. The interface library is designed in a way that makes it easy to swap out different parts (such as OpenSSL for another TLS library, or the CloudProxy bindings for an SGX runtime).

On top of that we implemented an HTTPS server that runs a simple file storage service. A user can log in using a username and password. They can then upload files, which will be encrypted with a key derived from their password. Users can later retrieve these files by logging in again. The webserver is entirely self-contained—the binary includes all HTML, JavaScript, CSS and images that are going to be served to the user.
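The password-derived file key can be sketched as a standard password-based KDF. The parameter choices (PBKDF2-HMAC-SHA256, iteration count, a per-user salt) are our own illustration, not necessarily what the Rust implementation uses.

```python
# Sketch of deriving the per-user file-encryption key from the login password,
# so files at rest are unreadable, even to an insider, without the password.
import hashlib

def user_file_key(password: str, salt: bytes, iterations: int = 100_000) -> bytes:
    """Derive a 32-byte file-encryption key; the salt lives in the user record."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
```

The same password and salt always recover the same key at the next login, so the service never needs to store the key itself.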

The webserver itself is 425 SLOC of Rust and 168 SLOC of HTML/JavaScript/CSS, plus the jQuery framework and the Bootstrap theme. The interface library is 983 SLOC of Rust, plus an additional 262 SLOC of Rust and 1154 SLOC of C/C++ for the CloudProxy bindings. In addition, the webserver and library pull in another 59000 SLOC of Rust for statically linked dependencies. These numbers represent an upper bound on the actual number of lines, as they include inline unit tests and feature-gated code that is not compiled—such as 18000 lines in unused Windows API bindings and 5000 lines in an unused HTTP/2 implementation. The CloudProxy bindings pull in another 17000 SLOC of C/C++ for statically linked dependencies, in addition to libprotobuf. We link to OpenSSL and common system libraries as well.⁸

6.2 Web forum
While the goal of this paper is not to show that legacy applications can be ported to our architecture—others have already shown similar results [6]—we do want to show that the architecture is fit for running large and complex Internet services. We mold an existing web server stack consisting of Apache, PHP, SQLite and phpBB to fit in our secure service architecture using CloudProxy and Linux OS features. The insecure filesystem, networking and IPC are provided as normal by the OS. Secure storage and attestation are provided by a small set of binaries that provide access to the CloudProxy interface. Secure networking is provided by Apache's mod_ssl, and the keying interface by OpenSSL. Database encryption is provided by SQLCipher [41], an enhanced version of SQLite that encrypts the on-disk files using AES. The database encryption key itself is stored in sealed storage.

To make the web stack measurable, all files (binaries, system libraries, web content, etc.) are bundled into a single bootstrap binary. The bootstrap binary creates a new root filesystem namespace—isolating this process's view of the filesystem from the rest of the system—from an empty RAM-disk and fills it with the bundled files. It then starts an initialization script that sets up the keys according to Figure 4 as well as the database encryption key, and then launches the Apache web server. The bundle is about 76 MB.

⁸To be precise: libz, libdl, libpthread, libm, libstdc++, libgcc_s, libc.

Figure 5: Cumulative distribution functions of the processing times for 3 different phpBB setups (plain, encrypted, encrypted+CloudProxy). For each line, n = 1692.

We measure the performance of three related web forum setups to see the effect encryption and the CloudProxy platform have. The first setup, plain, is a vanilla Apache, PHP, SQLite and phpBB installation. The second, encrypted, swaps out SQLite for SQLCipher. The final setup, encrypted+CloudProxy, runs the aforementioned bundled measured web stack.

For each setup, we load a small real forum dataset with 21 users, 94 topics and about 1300 initial posts. We then run a set of workers that access the forum over HTTPS simultaneously. Each user gets their own worker. Each worker selects a random topic they want to post to and performs the following procedure: log in, navigate to the appropriate topic listing, visit the topic, go to the last page, go to the reply screen, and post a reply. Each topic is visited 6 times in total. We measure the aggregate time it takes to complete each topic-posting procedure from login to post. We perform the entire test procedure 3 times, for a total of 3 · 6 · 94 = 1692 measurements per setup.

The average processing time for the plain version is µ = 1.671 s (σ = 0.693). For the encrypted version µ = 1.731 s (σ = 0.633), and for the encrypted+CloudProxy version µ = 1.738 s (σ = 0.646). The encrypted and encrypted+CloudProxy setups are about 4% slower than the plain version. There is no significant difference between the encrypted and encrypted+CloudProxy setups (two-sample Kolmogorov-Smirnov test, p > 0.73). The CDFs in Figure 5 show that all requests in the encrypted setups are just slightly slower than in the plain version, as opposed to the timing distribution being completely different. They also clearly show the lack of difference between the encrypted and encrypted+CloudProxy setups, with either setup being faster than the other at different percentiles.

7. DISCUSSION

7.1 Possible applications
The previous case studies are examples of possible applications of this technology, but many more exist. We discuss some.

7.1.1 Browser-based cryptographic systems
One of the arguments against doing in-browser cryptography using JavaScript is its poor auditability [23, 30]. Even if users assure themselves of the quality of the cryptographic system by carefully inspecting a page's DOM tree, there is no guarantee the server will send them the exact same page the next time they visit it. With an Attestation Transparency-supported secure service, a user does get that guarantee. Because the logic for sending HTTP responses is fixed within the unalterable secure service's identity, a client will receive the same script every time. This, in combination with the Web Crypto API [34], brings us closer to being able to do browser-based crypto properly and securely.

7.1.2 Bootstrapping secure web applications
In the web application world, many production updates are pushed out every day. Having to go through the update process and requesting a new TLS certificate every time might not be practical. It is not necessary, however, to include an entire website within the secure enclave.

Instead, one can create a small core web page at a well-known URL (e.g., https://example.com) that will load further web content. Even untrusted content (e.g., from not-audited.example.com) can be included when using a technique such as HTML5 privilege separation [3]. The small core is secure and verified and provides only security functionality to the web application, which should require infrequent changes. The untrusted part of the website can be developed and updated frequently as normal, while not being able to cause harm because of the privilege separation.

Including static external content, e.g., from Content Delivery Networks, is supported securely through the recent Subresource Integrity draft [2]. Websites can include a hash with a URL on an external resource, which will be checked by the browser.
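The browser-side check can be sketched as follows. Per the Subresource Integrity draft, the integrity value is a base64-encoded digest prefixed with the hash algorithm name (sha384 shown here).

```python
# Sketch of a Subresource Integrity check: hash the fetched resource and
# compare against the integrity value declared in the including page.
import base64
import hashlib

def sri_value(resource: bytes) -> str:
    """Compute an SRI integrity string like 'sha384-<base64 digest>'."""
    digest = hashlib.sha384(resource).digest()
    return "sha384-" + base64.b64encode(digest).decode()

def sri_matches(resource: bytes, integrity: str) -> bool:
    """The browser refuses to execute the resource on a mismatch."""
    return sri_value(resource) == integrity
```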

Including dynamic external content is trickier. If an external site is known to be a secure service as defined in this paper, verifying its known public key should be sufficient to ensure the safety of loading its contents. The Subresource Integrity mechanism could be extended to allow public key pinning on an external resource.

7.1.3 Encrypted e-mail storage server
An e-mail provider could run their SMTP/IMAP stack as two separate secure services. The IMAP server, storing the user's e-mails, will maintain an internal directory of users and corresponding encryption keys. Only the user will have access to their e-mails, which are encrypted at rest. The SMTP server, when receiving mail for a local user, will obtain the local public key for that user from the IMAP server and encrypt the received message before handing it to the IMAP server for storage.

This setup provides secure encrypted e-mail storage for legacy IMAP clients, including the inability of an insider to obtain the user's e-mails or credentials. Additionally, an SMTP client could verify the server's identity before submitting mail, making sure that the e-mail will get delivered to a secure mailbox.

7.2 Limitations and open research questions
The research in this paper does not address availability questions at all. Denial of service is a valid attack that an adversary might perform. Worse, destruction of user data is also possible. In order to bring the cloud environment closer to the desktop model, these issues need to be resolved.

Secure enclaves don't keep any persistent state. As such, an enclave can't know whether the untrusted party it relies on for data storage is returning the most recent version of its encrypted state. This attack against the freshness of data is called a rollback attack. Generic solutions for this problem—such as Memoir [28]—exist and can be applied to the design proposed in this paper.
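The essence of such a freshness check can be sketched as follows. The structure is our own simplification of the Memoir-style approach: sealed state carries a version number that is compared against a trusted monotonic counter (e.g., one provided by the hardware) before the state is accepted.

```python
# Sketch of rollback detection: refuse sealed state whose version is older
# than what a trusted monotonic counter says it should be.

def accept_state(state, read_counter):
    """Return the state's data, or raise if untrusted storage served stale state."""
    if state["version"] < read_counter():
        raise ValueError("rollback detected: stale sealed state")
    return state["data"]
```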

In order for a user to be able to fully trust a 'secure web application' as defined in the previous section, they need to know that the data they see and the input they provide are handled securely. The current web hardly provides such mechanisms. JavaScript and CSS on a page can arbitrarily change page elements to re-order information, capture user input, or even read cross-origin data [36]. More research effort is needed to provide the user with a secure and trustworthy user interface on the web.

The security of our system relies on an adversary not being able to break into the secure enclave. Even if the hardware is infallible—which it isn't—a simple software bug could leak sensitive information or provide an attacker with code execution capabilities. Bugs are exacerbated by being completely transparent about the code running on a machine. The transparency makes it much easier for an attacker to automate exploitation of known vulnerabilities. We propose using only safe languages such as Rust to write secure services, but even then there is no guarantee against compiler bugs or developer errors. Further guarantees could be obtained by using formal methods.

While SGX in theory provides good isolation, in practice it might have security flaws. In addition, SGX does not aim to protect against side-channel attacks [21]. The operating system is in an excellent position to mount side-channel attacks as well as Iago attacks [11]. SGX also has software components that are critical to its security, which might be more easily compromised than the hardware [14]. Compromise of SGX on the system running the secure service provides an attacker with access similar to that of directly compromising the secure service. But even if SGX is broken, future systems might provide better secure enclave functionality that can be used for the secure service design in this paper.

7.3 Adoption
Previous Trusted Computing approaches have not seen much practical use. Our scheme differs from most approaches in that the required hardware and software support is only needed on the server side. A single entity can decide to adopt our approach and make it happen, without being dependent on their customers' hardware or software choices.

Some service providers may be reluctant to adopt our scheme. However, we believe there is motivation to strongly consider it. The ITIF has predicted that the U.S. cloud computing industry could lose up to $35 billion by 2016 due to loss of trust in the post-Snowden era [18]. Forrester Research has predicted that losses could eventually be up to $180 billion [35]. Our scheme provides a mechanism that partially addresses these trust concerns.

8. RELATED WORK
Our work builds on the development of several important advancements.

An early line of research strove to verify the integrity of a known system stack. This was important since malicious or unreliable system software would certainly prevent the secure operation of even well-written applications which were themselves safe. The AEGIS system [5] proposed verifying the entire stack by having a chain of integrity checks where each step in the boot process verifies the next before proceeding. This work has been extended to protect other critical software like BIOS code which, if corrupted, presents the application developer with insurmountable barriers to safe operation. Parno et al. provide an overview [29] of the relevant techniques and research in this area.

Wobber et al. describe how, in the Taos operating system, code (represented, say, as a cryptographic hash) is treated as a first-class security principal which can be authenticated [39]. This enabled distributed programs to establish an authentication and authorization model that was as rich and reliable as that for a single program running in batch mode on a single machine. It is not always desirable to attest directly to code principals, as software can change frequently and can exist in many different configurations. Property-based attestation [12] aimed to solve this by using properties of software, instead of the software itself, as security principals.
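Treating a code hash as a principal can be sketched as below. This is an illustrative toy, not the Taos design; the `principal`, `grant`, and `authorize` helpers and the ACL structure are assumptions made for the example.

```python
import hashlib


def principal(code: bytes) -> str:
    """Name a program by the hash of its code: the digest is the principal."""
    return "code:" + hashlib.sha256(code).hexdigest()


# Hypothetical access-control list mapping code principals to permitted
# operations.
ACL: dict[str, set[str]] = {}


def grant(code: bytes, op: str) -> None:
    """Authorize a specific program (identified by its hash) to perform op."""
    ACL.setdefault(principal(code), set()).add(op)


def authorize(code: bytes, op: str) -> bool:
    """Decide based on what the code *is*, not on which user launched it.
    Any change to the code yields a new principal with no rights, which
    illustrates why frequently changing software motivated property-based
    attestation as an alternative."""
    return op in ACL.get(principal(code), set())
```

Granting a right to one version of a binary does not carry over to a modified version, since the modified code hashes to a different principal.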

Since secure distributed computing relied on increasingly well-studied and accepted cryptographic mechanisms, researchers sought a key management mechanism that allowed remote verification of the identity and isolation properties of program elements running on widely dispersed machines. Trusted computing primitives combining a discrete security chip [37] with processor features [20] provided the necessary underlying capabilities. Brannock et al. propose a Secure Execution Environment [9] with properties similar to our secure enclave. The CloudProxy Tao [25] is a consolidated execution environment, implemented recursively at each layer of software (VMM, OS, Linux process, container, plug-in), that programmers can generally use with existing programming tools and without extensive training. The relentless march towards cloud computing made these capabilities more critical: cloud data centers are widely dispersed and employ powerful insiders who, absent these capabilities, can access or control keys and undetectably change code running security-critical services, vitiating many of the hoped-for benefits of scale and elasticity. These new primitives allowed the safe operation of well-written programs on computers that also ran untrustworthy applications written by adversaries.

Researchers recognized that, even with the foregoing advances, large TCBs made security difficult to assure and maintain, and thus attempted to minimize the TCB footprint to help ensure that security guarantees could be credibly met. To this end, Hawblitzel et al. propose a system using Ironclad Apps [16] for secure remote computations by formally verifying the entire server stack. Flicker [26] emphasized applying the Trusted Computing primitives to small services within an application, isolating them from the rest of the application and the operating system; for example, Flicker enclaves were well suited as a virtual Hardware Security Module or as an authentication enclave that used a long-term secret, and the security model ensured that the OS, other applications and other portions of the same application could not get private key material. However, it was shown that Intel Trusted Execution Technology, which Flicker is based on, is not secure [40].

Various other systems were proposed to marry cloud computing with trusted computing, such as Self-service Cloud Computing [10], Cryptography-as-a-Service [7], and MyCloud [24]. These systems focused on providing trust in the cloud hypervisor to a customer of the cloud service provider, not on providing trust of Internet services to users of those services.

SGX [19] employed specialized hardware for this same purpose and also encrypted enclave memory in DRAM, thus protecting against an adversary with system bus access. Several recent works employ SGX to protect cloud service components. Haven [6] employed SGX to run MS SQL Server entirely in a secure enclave; clients of that database server could benefit from the Attestation Transparency Framework to verify the server they are connecting to. VC3 [33] implements secure MapReduce operations for the Hadoop distributed computation platform using SGX.

Attacks on PKI [1] threatened the trustworthiness of the co-dependent services which can benefit from the execution flexibility of cloud computing and the vast quantity of community-curated data. This prompted the development of Certificate Transparency [22] to highlight misissued certificates.

9. CONCLUSION

In this paper we have shown how to build secure services and how to enable clients to verify those services. This brings to clients the benefits of the cloud—including scalability, availability, elasticity and maintainability—while guarding against principal attacks (e.g. from insiders) that make cloud usage worrisome.

We have presented a system enabling flexible policy options to let users meaningfully choose security properties. Policies can be established by anyone, including software developers, auditors, independent organizations and communities. The policies are enforced through a compulsory transparency mechanism that brings malicious intent to light. This deters bad actors, since they can be easily identified for purposes of legal action or reputation damage.

We have extended the certificate transparency model to code, providing a technical mechanism for users to rely on the security principal which ultimately ensures security properties: code. In addition, we provide flexible trust models that allow any user to meaningfully adduce behavior guarantees from actual implementations. We demonstrate that the resulting systems can be nearly as efficient and scalable as existing services, and provide strong protection from mischievous providers, foreign governments and sloppy cloud data center operations.

All proposed mechanisms include incremental deployment paths which make our techniques usable now for present-day clients, while future deployment will increase security guarantees. In conclusion, we have presented a flexible, practical mechanism to build secure Internet services.

10. ACKNOWLEDGEMENTS

We thank Jon McCune and our anonymous reviewers for their feedback. This work was supported by Intel through the ISTC for Secure Computing, AFOSR under MURI award FA9550-12-1-0040, and NSF under CCF-0424422.

11. REFERENCES

[1] Comodo, DigiNotar Attacks Expose Crumbling Foundation of CA System. ThreatPost, 2011. URL: https://threatpost.com/090211/75609.

[2] D. Akhawe, F. Marier, F. Braun, and J. Weinberger. Subresource Integrity. W3C working draft, W3C, July 2015. URL: http://www.w3.org/TR/SRI/.

[3] D. Akhawe, P. Saxena, and D. Song. Privilege Separation in HTML5 Applications. In 21st USENIX Security Symposium, pages 429–444. USENIX, Aug. 2012.

[4] I. Anati, S. Gueron, S. Johnson, and V. Scarlata. Innovative Technology for CPU Based Attestation and Sealing. In Proceedings of the 2nd International Workshop on Hardware and Architectural Support for Security and Privacy, 2013.

[5] W. A. Arbaugh, D. J. Farber, and J. M. Smith. A secure and reliable bootstrap architecture. In IEEE Symposium on Security and Privacy, pages 65–71, 1997.

[6] A. Baumann, M. Peinado, and G. Hunt. Shielding Applications from an Untrusted Cloud with Haven. In 11th USENIX Symposium on Operating Systems Design and Implementation, pages 267–283. USENIX Association, Oct. 2014.

[7] S. Bleikertz, S. Bugiel, H. Ideler, S. Nurnberger, and A.-R. Sadeghi. Client-controlled cryptography-as-a-service in the cloud. In Proceedings of the 11th International Conference on Applied Cryptography and Network Security, pages 19–36, 2013.

[8] R. Boivie and P. Williams. SecureBlue++: CPU support for secure execution. Technical report, IBM Research Report, 2013.

[9] K. Brannock, P. Dewan, F. McKeen, and U. Savagaonkar. Providing a Safe Execution Environment. Intel Technology Journal, 13(2):36–51, 2009.

[10] S. Butt, H. A. Lagar-Cavilla, A. Srivastava, and V. Ganapathy. Self-service cloud computing. In Proceedings of the 2012 ACM Conference on Computer and Communications Security, pages 253–264, 2012.

[11] S. Checkoway and H. Shacham. Iago attacks: Why the system call API is a bad untrusted RPC interface. In Proceedings of the Eighteenth International Conference on Architectural Support for Programming Languages and Operating Systems, ASPLOS '13, pages 253–264. ACM, 2013.

[12] L. Chen, R. Landfermann, H. Lohr, M. Rohe, A.-R. Sadeghi, and C. Stuble. A protocol for property-based attestation. In Proceedings of the 1st ACM Workshop on Scalable Trusted Computing, pages 7–16, 2006.

[13] D. Cooper, S. Santesson, S. Farrell, S. Boeyen, R. Housley, and W. Polk. Internet X.509 Public Key Infrastructure Certificate and Certificate Revocation List (CRL) Profile. RFC 5280 (Proposed Standard), May 2008. Updated by RFC 6818. URL: http://www.ietf.org/rfc/rfc5280.txt.

[14] V. Costan and S. Devadas. Intel SGX explained. Cryptology ePrint Archive, Report 2016/086, 2016. URL: https://eprint.iacr.org/2016/086.

[15] E. Felten. A court order is an insider attack, 2013. URL: https://freedom-to-tinker.com/blog/felten/a-court-order-is-an-insider-attack/.

[16] C. Hawblitzel, J. Howell, J. R. Lorch, A. Narayan, B. Parno, D. Zhang, and B. Zill. Ironclad Apps: End-to-End Security via Automated Full-System Verification. In 11th USENIX Symposium on Operating Systems Design and Implementation, pages 165–181. USENIX Association, Oct. 2014.

[17] M. Hoekstra, R. Lal, P. Pappachan, C. Rozas, V. Phegade, and J. del Cuvillo. Using Innovative Instructions to Create Trustworthy Software Solutions. In Proceedings of the 2nd International Workshop on Hardware and Architectural Support for Security and Privacy, 2013.

[18] Information Technology and Innovation Foundation. How much will PRISM cost the U.S. cloud computing industry?, 2013. URL: http://www.itif.org/2013-cloud-computing-costs.pdf.

[19] Intel Corporation. Intel Software Guard Extensions Programming Reference, October 2014.

[20] Intel Corporation. Intel 64 and IA-32 Architectures Software Developer's Manual, volume 2C: Instruction Set Reference, chapter 5: Safer Mode Extensions Reference. 2015.

[21] Intel Corporation. Intel Software Guard Extensions Enclave Writer's Guide, 2015.

[22] B. Laurie, A. Langley, and E. Kasper. Certificate Transparency. RFC 6962 (Experimental), June 2013. URL: http://www.ietf.org/rfc/rfc6962.txt.

[23] N. Lawson. Final post on Javascript crypto, 2010. URL: http://rdist.root.org/2010/11/29/final-post-on-javascript-crypto/.

[24] M. Li, W. Zang, K. Bai, M. Yu, and P. Liu. MyCloud: Supporting user-configured privacy protection in cloud computing. In Proceedings of the 29th Annual Computer Security Applications Conference, pages 59–68, 2013.

[25] J. L. Manferdelli, T. Roeder, and F. B. Schneider. The CloudProxy Tao for Trusted Computing. Technical Report UCB/EECS-2013-135, EECS Department, University of California, Berkeley, Jul 2013. URL: https://github.com/jlmucb/cloudproxy.

[26] J. M. McCune, B. J. Parno, A. Perrig, M. K. Reiter, and H. Isozaki. Flicker: An Execution Infrastructure for TCB Minimization. In Proceedings of the 3rd ACM SIGOPS/EuroSys European Conference on Computer Systems, pages 315–328. ACM, 2008.

[27] F. McKeen, I. Alexandrovich, A. Berenzon, C. Rozas, H. Shafi, V. Shanbhogue, and U. Savagaonkar. Innovative Instructions and Software Model for Isolated Execution. In Proceedings of the 2nd International Workshop on Hardware and Architectural Support for Security and Privacy, 2013.

[28] B. Parno, J. R. Lorch, J. R. Douceur, J. Mickens, and J. M. McCune. Memoir: Practical state continuity for protected modules. In 32nd IEEE Symposium on Security and Privacy, pages 379–394, 2011.

[29] B. Parno, J. M. McCune, and A. Perrig. Bootstrapping trust in commodity computers. In 31st IEEE Symposium on Security and Privacy, pages 414–429, 2010.

[30] T. Ptacek. Javascript Cryptography Considered Harmful, 2011. URL: http://www.matasano.com/articles/javascript-cryptography/.

[31] Rust programming language. URL: https://www.rust-lang.org/.

[32] P. Saint-Andre and J. Hodges. Representation and Verification of Domain-Based Application Service Identity within Internet Public Key Infrastructure Using X.509 (PKIX) Certificates in the Context of Transport Layer Security (TLS). RFC 6125 (Proposed Standard), Mar. 2011. URL: http://www.ietf.org/rfc/rfc6125.txt.

[33] F. Schuster, M. Costa, C. Fournet, C. Gkantsidis, M. Peinado, G. Mainar-Ruiz, and M. Russinovich. VC3: Trustworthy Data Analytics in the Cloud Using SGX. In 36th IEEE Symposium on Security and Privacy, pages 38–54, May 2015.

[34] R. Sleevi and M. Watson. Web Cryptography API. W3C candidate recommendation, W3C, Dec. 2014. URL: http://www.w3.org/TR/WebCryptoAPI/.

[35] J. Staten. The cost of PRISM will be larger than ITIF projects. Forrester Research, 2013. URL: http://blogs.forrester.com/james_staten/13-08-14-the_cost_of_prism_will_be_larger_than_itif_projects.

[36] P. Stone. Pixel perfect timing attacks with HTML5. Presented at Black Hat USA 2013, 2013. URL: http://www.contextis.com/documents/2/Browser_Timing_Attacks.pdf.

[37] Trusted Computing Group. TPM Main Specification, 2011. URL: http://www.trustedcomputinggroup.org/resources/tpm_main_specification.

[38] J. Winter. Trusted Computing Building Blocks for Embedded Linux-based ARM TrustZone Platforms. In Proceedings of the 3rd ACM Workshop on Scalable Trusted Computing, pages 21–30. ACM, 2008.

[39] E. Wobber, M. Abadi, M. Burrows, and B. Lampson. Authentication in the Taos operating system. ACM Trans. Computer Systems, 12(1):3–32, 1994.

[40] R. Wojtczuk and J. Rutkowska. Attacking Intel Trusted Execution Technology. Presented at Black Hat DC 2009, 2009. URL: http://invisiblethingslab.com/resources/bh09dc/AttackingIntelTXT-slides.pdf.

[41] Zetetic LLC. SQLCipher. URL: https://www.zetetic.net/sqlcipher/.
