


Hardware-based trust and integrity: Trusted Platform Module (TPM) and Trusted Execution Environment (TEE) – possible building blocks for more secure systems?

FFI-rapport 2014/00920

Federico Mancini

Forsvarets forskningsinstitutt FFI

Norwegian Defence Research Establishment


10 September 2014


FFI-rapport 2014/00920

1294

P: ISBN 978-82-464-2428-6 E: ISBN 978-82-464-2429-3

Keywords

Platform integrity

Hardware trust anchor

Trusted execution environment

Trusted Platform Module

Trust

Approved by

Ronny Windvik Project Manager

Anders Eggen Director


English summary

With the increasing need for information sharing across confidentiality levels and coalition boundaries, it becomes ever more important to have a trusted infrastructure that can be relied upon to run critical software and handle sensitive data in a secure manner. High-assurance systems designed and certified for these purposes do exist, but various challenges and open problems must still be solved before a well-established and widely accepted solution emerges. Moreover, within isolated security domains one often finds that the deployed equipment does not meet any particular high-security requirement, but consists of common commodity hardware and software. It might therefore be desirable to keep this kind of equipment, but at the same time make it more secure. In this perspective, we present and analyze some new hardware technologies that were developed in the last 5-10 years but have only recently started gaining wide popularity in commodity computers, and that offer mechanisms to establish a high level of trust in a commodity platform. These technologies are the Trusted Platform Module (TPM) and the Trusted Execution Environment (TEE), which allow a machine to measure its own integrity, report it in a trustworthy manner to a third party, and guarantee the execution of a given piece of code in a sanitized and isolated environment. Although these technologies are not yet mature, and some criticism and open issues remain, they are based on well-established and sound security concepts, and the increasing interest from major actors in the IT industry is pushing their development forward at a fast pace. As far as military information systems are concerned, this technology could be very useful in building more trusted systems that can, among other things, support a flexible and secure information exchange between security/classification domains, at least when the security requirements are not so high that specialized high-assurance products are needed.


Sammendrag (Norwegian summary)

The need for information sharing across different security domains and between coalition partners is steadily increasing, and a precondition for information exchange to be possible at all is that trusted and secure mechanisms for handling sensitive data are in place. Even though high-assurance devices exist that can be used to build parts of the necessary infrastructure, many challenges remain. Among other things, it is mostly commercial solutions that are deployed within an isolated security domain, and it would be desirable to make the existing, familiar equipment more secure without having to invest in major new acquisitions. For this reason, this report presents an overview of various technologies found in commercial products today whose goal is to increase trust in a platform by protecting its integrity and reporting it in a trusted manner. We focus in particular on the Trusted Platform Module (TPM) and the Trusted Execution Environment (TEE), and briefly discuss how they can also be used in a military context, even though they cannot always meet the required assurance levels.


Contents

1 Introduction

2 General concepts
2.1 Trust, trustworthiness, assurance and integrity
2.2 The problem of bootstrapping trust [8]
2.3 Trusted boot process
2.4 Isolation and Trusted Execution
2.5 Secure Storage

3 Trusted Computing and TPM
3.1 Short history
3.2 TPM architecture and features
3.2.1 Platform Configuration Registers (PCR)
3.2.2 Trusted boot and Core Root of Trust for Measurement
3.2.3 Remote Attestation and Core Root of Trust for Reporting
3.2.4 Secure Storage, Sealing and Core Root of Trust for Storage
3.3 TPM security
3.3.1 Problems with the Core Root of Trust for Measurement
3.3.2 Physical attacks
3.4 TPM adoption and support
3.4.1 TPM support
3.4.2 TPM challenges
3.4.3 MTM, TPM 2.0 and Software TPM
3.5 Other Hardware Crypto Modules

4 The Trusted Execution Environment
4.1 Isolation Mechanisms
4.1.1 Language based Isolation
4.1.2 Sandbox based Isolation
4.1.3 Virtual Machine based Isolation
4.1.4 OS-kernel based Isolation
4.1.5 Hardware based Isolation
4.2 DRTM
4.2.1 Tboot and OSLO
4.2.2 Problems with DRTM implementations
4.2.3 Flicker [60]
4.3 ARM TrustZone
4.3.1 General concepts
4.3.2 Boot process
4.3.3 TrustZone maturity
4.3.4 ARM TrustZone Discussion

5 Conclusions

6 Bibliography

Table of Figures

Figure 2.1: CC evaluation process
Figure 2.2: The trusted boot process and how compromised components are always detected
Figure 2.3: Example of sealing. The PCR registers, which will be introduced later, are where the system measurements are securely stored.
Figure 3.1: TPM internals
Figure 3.2: PCR values can be used to verify the integrity of the log file containing the same measurements, but with a human-readable description of their meaning
Figure 3.3: Trusted boot with TPM
Figure 3.4: AIK creation process
Figure 3.5: Remote Attestation Protocol
Figure 3.6: TPM Key hierarchy from [27]
Figure 3.7: TPM certificates according to the TPM specifications [27]
Figure 4.1: Late Launch technology by Intel
Figure 4.2: Static root of trust vs. dynamic root of trust
Figure 4.3: The ARM TrustZone Architecture [63]
Figure 4.4: Example of the Secure World implementation [52]
Figure 4.5: A typical boot sequence of a TrustZone-enabled processor [52]


1 Introduction

As the need for sharing information increases, and systems become more and more interconnected, so does the need to protect sensitive information from unauthorized disclosure. The protection of confidentiality within the military IT infrastructure is traditionally handled either by enforcing unidirectional data flow, as in the Bell-LaPadula model [1], or by completely isolating different security domains (air gap). Unidirectional data flow from a low-classified to a high-classified domain can be implemented with data diodes that do not allow data to be transferred back. The more security-critical case of high-classified information being declassified in order to be transferred to lower domains is often handled with a manual review and release process. These approaches do not allow for automatic and bi-directional sharing of operation-critical information in a coalition, or even among different security domains in the same organization, resulting in a much less effective exploitation of one's information resources.

This is why products that break these models and allow information to be exchanged between security domains are becoming more and more common. Such products are referred to as Cross-Domain Solutions (CDS) and are based on risk reduction rather than risk avoidance1,2 as in the more rigorous security models like Bell-LaPadula. The problem with this approach is that, although the device controlling the data flow between domains, such as a guard [2], may be trusted and certified for high assurance, its decision to release data is based on the examination of some explicit or implicit feature of the data in transit. Unless all systems from which this data originates have the same level of trustworthiness as the guard, the integrity of the data features on which the guard's examination is based cannot be guaranteed, especially for explicit features like labels, format or other meta-data. Even if the inspection is based on content, an implicit feature, completely automatic solutions must rely on deterministic algorithms that have certain limitations, and even with additional manual review the risk that corrupted data is let through based on an erroneous evaluation is still non-negligible.

Today most governmental and military offices are equipped with standard commercial devices and software, since the risk of information leakage between different security domains is low thanks to the strong separation. If we were to simply connect these domains with high-assurance devices like guards, we would find ourselves in the situation mentioned above, i.e., with non-trustworthy data sources. However, replacing all commercial devices in a security domain with high-assurance ones may not be desirable, as one would like to keep the devices users are accustomed to, both for productivity reasons and to protect the investment made when purchasing the existing equipment. This is why in this report we investigate possible ways to increase the trustworthiness of such devices by exploiting technology that is already available in most of them and can be used for exactly this purpose.

1 http://www.owlcti.com/pdfs/certifications/UCDMO_Baseline_Inventory.pdf 2 http://www.crossdomain.org/index.php?title=What_is_Cross_Domain%3F


The technologies we will consider enable ways to dynamically measure and report the integrity of a system in a trustworthy manner. Based on these measurements, trust can be established or revoked dynamically for any node in the domain: if a node is no longer trusted to preserve the confidentiality of the information it is handling, it can be excluded from the infrastructure. This makes it possible to take real-time decisions about whether a system that requests to exchange information with a different domain can be trusted to do so in a manner that does not violate the policy in force. At the same time, a positive proof can be generated that a genuine and trusted process or application was run to execute a specific task, and that its execution was not compromised by a malicious entity. Finally, solutions for distributed data control similar to Digital Rights Management (DRM) can be implemented. We will present some of the most relevant existing technologies and discuss whether current products that implement them, or their underlying concepts, can be of use in designing a security architecture for future flexible exchange of information between security domains.

2 General concepts

2.1 Trust, trustworthiness, assurance and integrity

The concept of trust is not an easy one to define, as it is a subjective property that people tend to attribute based on expectations and personal experience. However, when considering trust in the context of computer security, there are some working definitions [3, 4]. Trust is the belief that a system will behave as expected with respect to a specific security capability. In general, a trusted computing system is a system "that employs sufficient hardware and software integrity measures to allow its use for simultaneously processing a range of sensitive information and can be verified to implement a given security policy". The set of all protection mechanisms in the system that together enforce a unified security policy is called the Trusted Computing Base (TCB). We also say that a trusted component is one whose failure can violate some of the security properties of the system.

Trustworthiness is the basis on which we can decide whether to put our trust in the system. With respect to information systems, it expresses the degree to which a system can be expected to preserve, with some degree of confidence, the confidentiality, integrity and availability of the information that it processes, stores or transmits, across a range of threats. It also includes how well the system secures the mechanisms that make it trusted [5], so that it is capable of operating within a defined risk tolerance despite the environmental disruptions, human errors, structural failures and purposeful attacks that are expected to occur in its operational environment; in some sense, how well it protects its own integrity.

What defines the trustworthiness of a system are its security functionality and its security assurance, where security assurance is the measure of confidence that the security functionality is implemented as intended and meets the given requirements. Evaluation processes exist that can be used to certify that the required functionalities are implemented and provided under all relevant circumstances. For instance, in the case of Common Criteria [6], there is a product to certify (the target of evaluation) and a security target (which may point to a protection profile (PP), a more implementation-independent document) that defines the properties the system should possess to pass the certification. In addition, there are (currently) seven different levels of assurance that can be obtained through the certification, according to how stringent the tests and requirements are. This may also help clarify the difference between a trusted system and a secure system. There can be different degrees of trust based on the perceived quality of a system, while the word secure reflects a dichotomy: a system is either secure or not secure [7].

Figure 2.1 CC evaluation process

2.2 The problem of bootstrapping trust [8]

The problem we will be discussing in the rest of the report has to do with dynamically establishing trust in a system that we have to communicate with in order to exchange information or to use some services it offers. In this context, although we may know the system to be trustworthy to some extent, before deciding to actually trust it we need some proof that it is indeed the right system we are interacting with (its identity) and that its integrity has not been compromised since it was deployed. We need to "bootstrap" trust in it.

Components that are intrinsically trusted to perform specific security tasks seamlessly to the rest of the system, such as a diode or a crypto-box, are not in the scope of this report. In such high-assurance systems, tamper-proof and tamper-responding physical mechanisms are often employed so that, once their hardware and software security functionalities have been formally verified in an evaluation process, their integrity is preserved by preventing any changes to them. Once both software and hardware have been certified as trustworthy, they are packed as a "black box" which cannot be altered without physically ruining it. In this case, trust can be established just by verifying the identity of the system, usually by means of a public/private key pair, where the private part is installed in the "box" by the manufacturer and protected by the above-mentioned mechanisms, while the public key is made available to the infrastructure as a certificate that can be used to verify the identity of the system and its properties. We put our trust in such systems because we can prove their identity and properties through the certificate, and since they cannot be modified, we can assume that their integrity, and therefore trustworthiness, is preserved. The downside of this solution is that one quickly ends up with an extremely rigid infrastructure that ages very fast, but is too expensive to replace and difficult to manage and to integrate with other components.

If the system is instead designed to be more flexible, so that changes to it are possible as in a commodity PC, we need to regularly measure and monitor its integrity in order to have some guarantee that its security functionalities have not been compromised, since the mechanisms protecting it might not be as effective as those mentioned earlier. In order to do this, only one component of the system needs to be really trustworthy: the one measuring and reporting the system's integrity status. If we can trust this information, then we can decide whether or not to extend our trust to the rest of the system. Such a component constitutes the root of trust of the system. Even though this component should be certified for high assurance in order to be trustworthy, this does not mean that the rest of the system automatically becomes as trustworthy. It simply means that we can trust the integrity measurements, which can then be used as a basis for deciding whether the whole system is worthy of our trust.

2.3 Trusted boot process

The most straightforward application of the idea of extending trust to the system starting from a trustworthy root of trust is the secure (or authenticated) boot process. The idea was presented as early as 1989 in a paper by Gasser et al. [9] and further refined in [10]. Some immutable boot component, stored in a read-only memory chip, measures the operating system (OS) before launching it, and continues execution only if the measurement matches some given policy. In this context, a measurement is either a cryptographic digest (or hash) of the binary file containing the OS, with the policy being the expected digest value, or a signature on the binary file, with the policy being that it verifies under a given public key. If the binary file that has to be loaded to launch the OS passes the policy check, the boot process launches the OS and passes control to it. If not, it means that someone has replaced or compromised the OS binary, and the boot process halts. For this process to be trustworthy, both the boot code and the digest value must be stored in a safe place where they cannot be changed, the code must be correct, and it must not be possible to bypass the verification. Implementing a secure boot therefore requires:

1. A secure storage location
2. A root of trust for measurement
3. A mechanism ensuring that the root of trust is always executed when the machine boots
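To make the policy check concrete, here is a minimal sketch in Python (the image contents and the expected digest are invented for illustration; a real implementation lives in ROM firmware, not in a high-level language):

    import hashlib
    import sys

    # Illustrative only: on a real platform this expected digest is stored in
    # read-only memory (requirement 1) together with the boot code itself.
    EXPECTED_DIGEST = hashlib.sha1(b"known-good OS image").hexdigest()

    def measure(image: bytes) -> str:
        # The root of trust for measurement (requirement 2) hashes the next stage.
        return hashlib.sha1(image).hexdigest()

    def secure_boot(image: bytes) -> None:
        # Requirement 3 guarantees that this check runs first at every power-on.
        if measure(image) != EXPECTED_DIGEST:
            sys.exit("measurement does not match policy: boot halted")
        print("policy satisfied: launching OS")  # control is handed to the OS

    secure_boot(b"known-good OS image")  # passes the check and "boots"
    secure_boot(b"tampered OS image")    # fails the check and halts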

A chip with read-only memory and programmable non-volatile RAM protected by hardware security mechanisms would satisfy the first two requirements. The third would be satisfied by wiring the chip to the motherboard so that it always executes as the first component at boot time.

Solutions to establish trust in a platform using only software-based techniques have also been investigated in the literature [11, 12, 13, 14, 15]. The main approach consists in using a verification function that calculates a checksum over the code present in the entire device memory. This assumes the presence of an external verifier that must know the expected content of the memory and the exact hardware configuration of the device. If these requirements are satisfied, the verifier can challenge the device to report a checksum over random memory locations and measure whether the time used to report the calculated value is within some expected range. A wrong checksum, or an excessive response time, could indicate the presence of an attacker who manipulated the memory content. Clear disadvantages of this approach are that the verification function must be as optimized as possible while remaining cryptographically strong, so that an attacker cannot create a faster version that manipulates code within the given time range; that an intimate knowledge of each device is needed in order to establish the correct acceptable time range; that the device must be simple (like an embedded device) and have only one processor, in order to be able to verify all memory where malware might hide; and that a request-response protocol is required, because secrets cannot easily be stored securely and reported on demand [16, 17, 12]. On the other hand, a software-only solution has the advantages that it is easily replaceable if a problem is found, that it attests software at run-time rather than at load-time, and that it is much cheaper and lighter than hardware-based alternatives. Although hardware- and software-based roots of trust might be complementary, in this report we will consider only the former, as it appears to have been recognized as the more mature and preferable solution at the moment [18].

A problem with the secure boot presented above is the inflexibility of the process. If the OS binary must be upgraded, the corresponding hash must also be changed, or the system will stop working. Similarly, if the binary is compromised or corrupted, the system becomes unusable. A compromised system might sometimes still be usable as long as, for instance, only services that are not security-critical are used, or access to the network is blocked. Besides, there is no way for a third party to verify that the platform actually booted in a secure way. That is why another type of boot process, called trusted boot, has been proposed. Here the measurements are taken and stored in hardware-protected storage where they cannot be modified, and the list of approved software and hardware is stored in an external database and checked against the stored measurements by those who want to interact with the system. The measuring and storing process is linear: from the root of trust, to the BIOS, to the boot loader, to the OS. Each component is first measured, then launched, and finally given the responsibility to measure the next component in the boot process. Assuming a trustworthy root of trust for measurement, it will always be possible to detect anomalies in the components of the boot chain. The reason is that even if some component of the boot chain is compromised and stores a fake measurement in the hardware root of trust, it will itself have been correctly measured, before it ran, by the component that ran before it, and this measurement will reveal that a compromised binary was used. Of course, that previous component could also be compromised and lie about its measurements, but we can apply the same argument recursively until we arrive at the very first component in the boot chain, namely the core root of trust for measurement. If this component is really trustworthy, immutable and always run, then at least the first measurement will always be correct, and a potentially compromised component will be detected, since it will always be correctly measured.
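The detection argument can be illustrated with a small simulation (component contents and the rootkit are invented; in a real chain the register lives in tamper-resistant hardware and the components are firmware and binaries):

    import hashlib

    def extend(register: bytes, measurement: bytes) -> bytes:
        # The protected register is never overwritten, only aggregated.
        return hashlib.sha1(register + measurement).digest()

    def trusted_boot(components):
        register = b"\x00" * 20  # hardware-protected storage, reset at power-on
        for name, code in components:
            # Each stage is measured by its predecessor BEFORE it is run.
            register = extend(register, hashlib.sha1(code).digest())
        return register

    clean = [("BIOS", b"bios v1.0"), ("loader", b"grub"), ("OS", b"kernel")]
    infected = [("BIOS", b"bios v1.0"), ("loader", b"grub+rootkit"), ("OS", b"kernel")]

    # The final values differ, so the compromised loader cannot hide.
    print(trusted_boot(clean) != trusted_boot(infected))  # True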


The main difference between secure and trusted boot is that in the latter the system is allowed to boot even though some components have the wrong measurement, since no reference measurement is stored on the system.

Figure 2.2 The trusted boot process and how compromised components are always detected

An issue with both these approaches is that the integrity of the system is measured only up to the operating system. Once the OS is in charge, an attacker can still manage to exploit some vulnerability in the OS itself and compromise its integrity, even though the trusted boot process was successful and the system booted with an approved software configuration. Whether the approved software has some unknown vulnerability, or an attacker managed to install malicious software after a successful boot, is a different and much harder problem that has to do with run-time integrity. What we have considered so far is referred to as a static root of trust, because it can be used only at boot time, by creating a chain of transitive trust among the boot components.

2.4 Isolation and Trusted Execution

In order to solve the problem of run-time integrity mentioned in the previous section, a Dynamic Root of Trust has been proposed [19]. The idea is to have hardware-based mechanisms that can, at any moment, sanitize the computing environment and allow critical services or applications to run in isolation from a potentially compromised pre-boot environment. With this approach the trust in the platform can be re-established when needed without having to perform a full reboot, hence the word dynamic. The new secure environment where critical software can run in isolation from the rest of the system is called a Trusted Execution Environment (TEE) and is discussed in Chapter 4. Again, the guarantee in this case is that the intended software is running in isolation on the platform, not necessarily that it is impossible to compromise it if some vulnerability were found in its code. The software-based approaches mentioned in the previous section can also be used to achieve a similar untampered execution environment at run-time, although with the limitations we mentioned.


2.5 Secure Storage

Another advantage of having a trusted hardware component over software-only solutions is that the measured platform configuration can be securely stored and used as an extra parameter to enforce policies based on given approved configurations. Secrets are then protected even if the integrity of the platform is compromised. For instance, a cryptographic key can be released to the system only if the current platform configuration matches the one in the key policy (which is also protected by the same trusted hardware), even when the other authentication credentials are provided correctly (e.g., a password). An attacker who stole a password by installing a key-logger on the system would thus be unable to use the password to access any of the user's private keys, because the presence of the key-logger itself results in a different platform measurement.

The Evil Maid attack [20] is a possible way to circumvent this protection, but it requires repeated physical access to the device, which usually is beyond the intended protection offered by such trusted hardware devices. The idea is to first install a key-logger with a USB stick (first physical access). The machine then presents a fake login screen to the user to steal the password and fakes a "wrong password" message, since the computer cannot start with the key-logger on it. Finally, the key-logger uninstalls itself and restarts the machine normally, so that the user thinks she simply typed the wrong password the first time. The stolen password has to be stored in clear on the machine itself, so the attacker has to physically steal the machine in order to use it (second physical access). A dynamic root of trust should in theory defend against this attack if the user credentials are entered after the OS is launched and not in the pre-boot environment.

Figure 2.3 Example of sealing. The PCR registers, which will be introduced later, are where the system measurements are securely stored.
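As a sketch of the sealing idea (the measurement values are invented, and Fernet from the Python cryptography package stands in for the TPM's internal key handling):

    import hashlib
    from cryptography.fernet import Fernet

    def seal(secret: bytes, expected_pcr: bytes):
        # Bind a secret to a platform state: store the ciphertext together
        # with the policy, i.e. the PCR value under which it may be released.
        key = Fernet.generate_key()  # stands in for a TPM-protected key
        return Fernet(key).encrypt(secret), key, expected_pcr

    def unseal(blob: bytes, key: bytes, policy_pcr: bytes, current_pcr: bytes) -> bytes:
        # The secret is released only if the platform state matches the policy,
        # regardless of what other credentials (e.g. a stolen password) are shown.
        if current_pcr != policy_pcr:
            raise PermissionError("platform state changed: secret not released")
        return Fernet(key).decrypt(blob)

    clean = hashlib.sha1(b"approved boot chain").digest()
    infected = hashlib.sha1(b"boot chain with key-logger").digest()

    blob, key, policy = seal(b"disk encryption key", clean)
    print(unseal(blob, key, policy, clean))  # released on the clean platform
    unseal(blob, key, policy, infected)      # raises PermissionError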


3 Trusted Computing and TPM

The trusted hardware component described in Section 2.3 has indeed been specified and can be found in millions of commercial devices. It is known as the Trusted Platform Module [21], and it is the main component of a technology known as Trusted Computing, developed and promoted by the Trusted Computing Group (TCG)3. Its goal is to provide the means for a platform to measure and attest its integrity, so that a third party can verify such measurements and establish trust in the platform: trust that a certain hardware and software configuration is present and that, therefore, some specific security mechanisms will be enforced.

3.1 Short history

The first efforts towards a hardware-based mechanism that could increase trust in a commodity platform were started by the Trusted Computing Platform Alliance (TCPA)4, founded in 1999, which defined the first specification for a Trusted Platform Module, also known as TPM, in 2001. This was supposed to be the first component of a trustworthy platform architecture. The initiative received strong criticism, partly because of its association with the Next Generation Secure Computing Base (NGSCB, also known as Palladium) proposed by Microsoft. The main criticism was that the system was conceived only to restrict the users' rights over their machines, and over the applications and data that could run on them, rather than to provide security5. Digital Rights Management (DRM) and vendor lock-down were indeed possible applications of this technology, but the main objective of the TCPA was to provide a computer with the means to measure its own integrity and report it to a third party in a trustworthy manner, and to protect its secrets in case its integrity was compromised. Privacy was also a concern, as a hardware component with a unique identity certificate could be used to identify specific platforms and ultimately users. Due to the negative publicity, a new alliance, the TCG, was formed in 2003 to carry on the TPM specification and try to bring forth a better image for the initiative. Today it is estimated that almost 1 billion TPM chips have been shipped and integrated into existing devices [22], and the specifications for TPM 2.06 are in their final draft. In addition, many other workgroups have been established to work on other parts of the trusted architecture envisioned by the TCG, like the Trusted Network Connect (TNC) group7 and the Mobile Platform group for the Mobile Trusted Module (MTM)8.

3 http://www.trustedcomputinggroup.org/ 4 http://mako.cc/talks/20030416-politics_and_tech_of_control/trustedcomputing.html 5 http://www.gnu.org/philosophy/can-you-trust.html 6 http://www.trustedcomputinggroup.org/resources/tpm_library_specification 7 http://www.trustedcomputinggroup.org/developers/trusted_network_connect 8 http://www.trustedcomputinggroup.org/developers/mobile


3.2 TPM architecture and features

The TPM specifications have gone through various revisions since the last TCPA version in 20029, but the main features have remained unchanged. The main goal has always been to have a cheap hardware component, compatible with most existing commodity platforms, that could provide the following security features:

• Integrity measurements
• Trusted Boot
• Sealed storage
• Remote Attestation
• Isolated Execution

The specifications do not dictate that the TPM must be implemented as a chip but, nevertheless, this is its most common incarnation, so for the time being we will refer to it as a separate hardware component. The TPM alone is simply a passive crypto-processor that can generate, protect and use cryptographic keys in an environment isolated from the rest of the system. On its own, it cannot take any action that influences the operations being executed on the system, except that it might refuse to release a key needed to decrypt some secret. In this regard it might be compared to a smart-card, but there are differences. For one thing, the TPM is part of the platform TCB, and it can therefore be used to keep track of system changes from the very beginning of the boot process. For another, it contains something called Platform Configuration Registers (PCRs), which are used for exactly this purpose. Figure 3.1 shows the TPM internals.

Figure 3.1 TPM internals.

9 http://www.trustedcomputinggroup.org/files/resource_files/64795356-1D09-3519-ADAB12F595B5FCDF/TCPA_Main_TCG_Architecture_v1_1b.pdf


3.2.1 Platform Configuration Registers (PCR)

A PCR is nothing more than a 20-byte register, but it has the fundamental property of not being resettable unless the platform itself is reset. In other words, one can write as many times as one wishes to the same register, but the value that was previously stored is not lost; it is aggregated with the new one. This is possible because a PCR can only store the result of a SHA-1 operation, which is exactly 20 bytes, and when a new value has to be written, it is hashed together with the old one, and only the result of this operation is stored. For instance, if one wanted to store the hash value "a3f42afjim399jv8f90e3i2" in PCR1, the TPM would execute the following operation:

PCR1 = SHA1(PCR1 || a3f42afjim399jv8f90e3i2)

where the || operator indicates concatenation. The value found in PCR1 after this operation will not be "a3f42afjim399jv8f90e3i2", but the aggregated hash of the old and new values, even if this was the first time something was written to this register, since PCRs are initialized to 20 bytes of zeros. The most common use of PCRs is to store the measurements of the components involved in the boot process, but any value can be extended into these registers, typically to allow for integrity verification of some file stored outside the TPM. A detailed log file describing the platform configuration can be generated during the boot process, while the aggregated hash of its entries is incrementally stored in the PCRs and used as a checksum to detect unauthorized modifications. Unlike normal files, in fact, PCRs cannot be arbitrarily altered by a remote attacker or by malware, thanks to the hardware-enforced protections.
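A short sketch of the extend operation and of the log verification just described (the measurement values are invented):

    import hashlib

    def pcr_extend(pcr: bytes, value: bytes) -> bytes:
        # The TPM never overwrites a PCR: it stores SHA1(old value || new value).
        return hashlib.sha1(pcr + value).digest()

    pcr1 = b"\x00" * 20  # PCRs are initialized to 20 bytes of zeros
    first = hashlib.sha1(b"boot component").digest()
    pcr1 = pcr_extend(pcr1, first)
    # pcr1 now holds the aggregate of the initial zeros and `first`,
    # not `first` itself.

    # Verifying a measurement log (as in Figure 3.2): replay every logged
    # entry and compare the result with the value reported by the TPM.
    def replay(log):
        pcr = b"\x00" * 20
        for measurement in log:
            pcr = pcr_extend(pcr, measurement)
        return pcr

    assert replay([first]) == pcr1  # the log is consistent with the PCR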

Figure 3.2 PCR values can be used to verify the integrity of the log file containing the same measurements, but with a human-readable description of their meaning


3.2.2 Trusted boot and Core Root of Trust for Measurement

Thanks to the TPM, a trusted boot as described in Section 2.3 can be executed by storing the measurements in the PCRs and in a log file simultaneously (Figure 3.3). The PCR values can then be used to verify the integrity of the easier-to-read log file. The problem remains of how a third party can trust that the PCR values received from a platform actually came from a TPM. To solve this problem, a root of trust for reporting is needed; this is discussed in the following section. The issue of how to bootstrap trust in the platform, i.e., how to provision a trustworthy core root of trust for measurement (CRTM), has been solved by equipping the first piece of the system BIOS, the Boot Block, with measurement capabilities [23, 24]. This solution is unfortunately not optimal, as it means that BIOS manufacturers must implement their own CRTM, resulting in non-standardized code that is often prone to bugs. In addition, this code can be flashed just like the rest of the BIOS. The CRTM is therefore not as immutable as it should be, potentially compromising its integrity. Ideally the CRTM should be part of the TPM itself, but booting a platform from the TPM would require an architectural change, which is not a reasonable solution for commercial products.

Figure 3.3 Trusted boot with TPM

3.2.3 Remote Attestation and Core Root of Trust for Reporting

In order to report the values stored in the PCRs, a mechanism must be in place to authenticate both the TPM and the values being reported. A third party must be able to verify that the measurements actually come from a genuine TPM and that they have been stored protected in the PCRs. The problem is solved in the usual manner: with certificates. We trust the manufacturer of the TPM to install in the chip a unique certificate called the Endorsement Certificate (EC), in turn signed by a trusted Certification Authority like VeriSign, together with the corresponding private Endorsement Key (EK). Such certificates must be used exclusively to certify TPMs, and the private key must be installed at production time, with no possibility of extraction other than by physically tampering with the chip.


In theory, the EC could be used to verify that a real TPM, in possession of the corresponding private key and built according to proper specifications, is communicating with us, while the private EK could be used to sign the PCR values and prove their authenticity. However, out of privacy concerns, the specifications were written so that the private EK cannot sign anything; a second (anonymous) key with a corresponding certificate must instead be dynamically issued to a TPM to protect its identity. This new key is used for attestation purposes and is therefore called an Attestation Identity Key (AIK), with the corresponding certificate being the AIK certificate. A TPM can have as many AIKs as wished, but only one EK.

Figure 3.4 AIK creation process

The protocol to issue AIKs, shown in Figure 3.4, uses what is called a Privacy CA (PCA) [25]. The TPM generates an AIK pair and sends the public part together with the EC to the PCA (possibly encrypted with the PCA's public key). The PCA verifies the validity of the EC and generates an AIK certificate for the TPM. However, the fact that the EC is valid only means that it was issued to some real TPM, not that an actual TPM is communicating with the PCA. That is why the AIK certificate is encrypted with the public EK before being sent back to the alleged TPM. In this way, only the owner of the private key (hopefully the TPM) will be able to decrypt and use the AIK for attestation purposes. An extra protection mechanism is that a TPM will only decrypt an AIK certificate corresponding to an AIK that was generated by the TPM itself. The reason is that if an attacker simulated the request from the TPM to the PCA and tried to certify a fake AIK (generated externally in order to have access to its private portion), he or she would have to make the TPM decrypt the returned encrypted AIK certificate, and that decryption would fail. Notice that the attacker is assumed to have control of the TPM, so the EC could easily be extracted, and the PCA public key is public by definition.

What makes an AIK a root of trust for reporting is the fact that it can only be used to sign values stored in the PCRs and other keys generated by the TPM, not external data. The hardware in the TPM enforces this property, and this is why it is a hardware-based root of trust, together with the AIK certificate, which can be decrypted only by a genuine TPM and vouches for the authenticity of the AIK. Once the TPM has an AIK with a corresponding certificate signed by a trusted PCA, it can send the signed PCR measurements so that a third party can use them to verify the integrity of the platform. The most common method is to match the measurements against a database containing the hash measurements of all approved software (Figure 3.5). An alternative attestation protocol, which tries to address the privacy concerns posed by the uniqueness of the EC (which renders a specific TPM identifiable), is the Direct Anonymous Attestation (DAA) protocol [26].
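A minimal sketch of such an attestation exchange (simplified: the real TPM_Quote structure and the certificate checks are omitted, and the AIK is simulated here with the Python cryptography package):

    import os
    import hashlib
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    # Simulated AIK: on a real platform the private half never leaves the TPM.
    aik = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    pcrs = hashlib.sha1(b"measured boot chain").digest()  # reported PCR state
    approved_db = {pcrs}                                  # verifier's database

    # 1. The verifier sends a fresh nonce to prevent replay of old quotes.
    nonce = os.urandom(20)

    # 2. The platform answers with a quote: the PCR values signed with the
    #    AIK together with the nonce (TPM 1.2 uses SHA-1 throughout).
    quote = aik.sign(pcrs + nonce, padding.PKCS1v15(), hashes.SHA1())

    # 3. The verifier checks the signature with the AIK public key (vouched
    #    for by the AIK certificate) and looks the PCRs up in its database.
    aik.public_key().verify(quote, pcrs + nonce, padding.PKCS1v15(), hashes.SHA1())
    assert pcrs in approved_db
    print("platform attested: reported configuration is approved")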

Figure 3.5 Remote Attestation Protocol

3.2.4 Secure Storage, Sealing and Core Root of Trust for Storage

The most immediate security mechanism offered by a TPM is secure storage. Being essentially a crypto-processor, it can generate encryption keys on the fly and use them in a shielded memory location inside the chip itself. However, since this type of memory is limited, only asymmetric encryption operations are performed by the TPM, while symmetric keys are exported to encryption software that is trusted to perform the encryption of larger amounts of data. What the TPM offers in this case is a strong key chain, based on a hardware root of trust for storage, to protect the symmetric key when it is not in use. The idea is that if the TPM had to perform symmetric encryption internally, we would have to protect the clear-text before it reached the TPM, and if we can do that, then we can just as well protect the encryption keys instead. In total there are five types of keys a TPM can generate, plus a special key called the Storage Root Key (SRK) that constitutes the root of trust for storage. The SRK is a 2048-bit RSA key that is generated when ownership of the TPM is established for the first time and is always guaranteed to be present in the TPM. Any other key generated afterwards is placed in a key tree with the SRK as its root. In the tree, every key is encrypted with its parent and in addition requires a password, called an AUTH secret (because it is used both for authentication and authorization purposes), in order to be used. In other words, in order to use a key in the tree, the user must know the AUTH secrets of all its ancestors up to the SRK. This makes it possible to create different branches for different users, as the SRK secret is usually well known (the default value is 20 bytes of zeros).


The five different types of keys are:

• Identity Keys: the AIKs mentioned earlier, used to prove the identity of a TPM and to sign PCRs and other TPM keys. They can be stored only as direct children of the SRK, to make sure that they actually belong to a specific TPM.
• Storage Keys: 2048-bit RSA keys used to encrypt other TPM keys on the system. They can be used to partition the storage among different users.
• Binding Keys: 2048-bit RSA keys used to encrypt symmetric keys, which can then be used to encrypt the actual data. They are also used to receive secrets from other parties, so that only the TPM can decrypt them with the unbind command.
• Signing Keys: 2048-bit RSA keys used solely to sign data.
• Legacy Keys: 2048-bit RSA keys used to both sign and encrypt data. They exist to offer compatibility to legacy applications, but their use is not recommended.

The reason for so many key types is that it is considered bad practice to use the same key for different tasks, e.g., encryption and signing.
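A small sketch of the key tree idea (symmetric Fernet keys from the Python cryptography package stand in for the TPM's RSA keys, purely for brevity):

    from cryptography.fernet import Fernet

    class StoredKey:
        # A key kept outside the chip, wrapped (encrypted) under its parent.
        def __init__(self, parent_plain: bytes, auth_secret: bytes):
            self.plain = Fernet.generate_key()  # a real TPM never exposes this
            self.blob = Fernet(parent_plain).encrypt(self.plain)
            self.auth = auth_secret

    def load(key: StoredKey, parent_plain: bytes, auth_secret: bytes) -> bytes:
        # Using a key requires its AUTH secret and an already unwrapped parent,
        # so every ancestor up to the SRK must be unlocked first.
        if auth_secret != key.auth:
            raise PermissionError("wrong AUTH secret")
        return Fernet(parent_plain).decrypt(key.blob)

    srk = Fernet.generate_key()                    # root of trust for storage
    branch = StoredKey(srk, b"user AUTH secret")   # one branch per user
    leaf = StoredKey(branch.plain, b"leaf AUTH secret")

    branch_plain = load(branch, srk, b"user AUTH secret")
    print(load(leaf, branch_plain, b"leaf AUTH secret") == leaf.plain)  # True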

Figure 3.6 TPM Key hierarchy from [27]

TPM keys are also more generally divided into migratable and non-migratable keys. Migratable keys can be exported from the TPM encrypted with the EK, so that the manufacturer can assist with the data recovery process by decrypting the keys and re-encrypting them with a new TPM's EK (the manufacturer must then be trusted with a copy of the TPM private key). Keys that are defined as non-migratable cannot be exported, and in case of a malfunction they will be lost together with the data they encrypt. The SRK and AIKs are always non-migratable, as they are roots of trust that must be specific to a unique TPM. However, no implementation of the Migration Authority needed to perform the migration service is in practice available to date.


3.3 TPM security

The highest certification reached by a TPM based on the TCG PP [28] is EAL 4+10,11, and it might be too low for some military or highly critical applications. Besides, only one manufacturer we are aware of, namely Infineon, provides the TPM with the certificate necessary to perform remote attestation, i.e., the Endorsement Certificate. Still, according to the TCG specifications, the TPM should contain four more certificates, as shown in Figure 3.7, taken from [27]. A new protection profile seems to be under definition12, but it is not clear which assurance level it will target. In general, there might be skepticism about the fact that an external manufacturer has to be trusted to generate and install the endorsement certificate in the TPM. Attacking the production chain at that stage might be feasible for attackers with enough resources, like a foreign state, and it would completely compromise the security of the TPM, since the root of trust for reporting would no longer be trustworthy. In the following subsections we review some known weaknesses and possible attacks.

Figure 3.7 TPM certificates according to the TPM specifications [27]

10 http://www.st.com/web/en/catalog/mmc/FM143/CL1814/SC1522 11 http://www.infineon.com/cms/en/corporate/press/news/releases/2009/INFCCS200912-015.html 12 https://www.niap-ccevs.org/pp/draft_pps/


3.3.1 Problems with the Core Root of Trust for Measurement

This feature is currently implemented as part of the BIOS, not as part of the hardware module, and although there is a specification for it, the implementations are not documented, since each vendor implements its own. The result is that, although the TCG specifications define what should be stored in each PCR, it has been shown that what is measured in practice is much less than required [19, 29]. Being able to subvert a root of trust has, as is easy to imagine, disastrous consequences. Without a root of trust the whole system is compromised, because nothing can be trusted anymore. All mechanisms, applications and services based on the PCR measurements are compromised, because an attacker can now fake the measurements stored in the TPM, invalidating the fundamental property that gave us trust in the first place. Of less severity, but still worth mentioning, are the implementation flaws found in the various boot components that form the chain of trust, like the BIOS itself and the boot loader, as pointed out in [19].

3.3.2 Physical attacks

The TPM has been specifically designed to resist software-based attacks, which means that if an attacker has physical access to the module or the platform, its integrity might be compromised. However, simply extracting secrets like private keys from the TPM is not straightforward even for the legitimate owner, although the TPM does not employ any tamper-proof mechanism. One such successful attempt has been reported so far13,14, but it took several months of work and is effective only for the specific model that was considered15. Having physical access to the machine hosting the TPM also allows for the Evil Maid attack we mentioned earlier, where a hotel maid could have access to a laptop left in a hotel room, install malware to steal the TPM password from the user, and at a later time recover the laptop, which can now be accessed with the stolen password (once the key-logger has been removed). Although there are mitigations for this attack, not all scenarios can be accounted for16. Another problem with an early version of the TPM was that the PCR values could be reset without restarting the computer, so an attacker could fake any configuration. Cold boot attacks [30] are also possible, since symmetric keys are not protected inside the TPM but are released to RAM when the platform configuration matches the one in the key policy. Although some mitigations were released by the TCG [31], the attack is not completely avoidable. If an attacker can listen to the communication on the bus to which the TPM is connected, other types of attacks are also possible, despite the encrypted communication [32].

13 https://www.youtube.com/watch?v=Qk73ye2_g4o 14 https://www.youtube.com/watch?v=h-hohCfo4LA 15 https://www.trustedcomputinggroup.org/community/2010/02/black_hat_conference_report_about_tpms 16 http://theinvisiblethings.blogspot.no/2009/10/evil-maid-goes-after-truecrypt.html


Finally, a very difficult challenge is how to prove to a user that the machine being used is indeed equipped with a TPM, i.e., how can a TPM authenticate itself to a user? The "Cuckoo Attack" described in [33, 34, 8] shows that another fundamental issue with the TPM is that there is no reliable way to prove to the user that an actual TPM is installed and being used on their machine. Although public key certificates are a good way to authenticate two machines to each other, they are not easily verifiable by a person. One could think of using a token with preloaded AIK certificates to query a machine and test for a valid TPM, but even in this case one thing cannot be verified: the TPM's proximity. A corrupted machine could, in fact, forward attestation requests to a good machine and use the responses as its own, thereby tricking the user into disclosing some secrets. This is the Cuckoo attack. Possible approaches to mitigate the attack are discussed in [34], and one possibility, a timing side-channel to test for proximity, is evaluated in [35]. This problem especially affects users who need to trust a machine that is not under their control, for instance an ATM from which they need to draw some cash.

3.4 TPM adoption and support

As mentioned in the beginning of this chapter, the TPM has been around for over 10 years and can today be found in something like 1 billion devices. However, there are still very few applications and systems that fully leverage its functionalities. Still, in [22] the author argues that the TPM's prime time is starting right now and that its adoption is quickly going to increase, the reasons being tighter integration with Windows-based systems, a greater need for security especially in mobile devices, new threats and wider industry support. NIST has also published a series of reports [18, 24, 23] where the TPM and its functionalities are identified as essential to securing modern devices, and even the U.S. Department of Defense has instructed that the TPM be part of newly acquired equipment [36].

3.4.1 TPM support

The TPM can be used by various components in a platform, and specific drivers and software stacks are needed for each of them. Most BIOS vendors provide TPM-enabled firmware including a Core Root of Trust for Measurement, and although no documentation is available for the specific implementations, one can expect that they follow the TCG recommendations [37] to some extent. All the other components involved in the trusted boot must also support the TPM in order to be able to take measurements and store them in the PCRs. TrustedGRUB17, OSLO [19] and tboot18 are examples of "trusted" boot-loaders. The Trusted Software Stack (TSS) defined in [38], which allows the OS and the applications running on it to communicate with the TPM, has been implemented in C with TrouSerS19, in Java with JSR 321 [39] and in C++ with µTSS [40].

17 http://sourceforge.net/projects/trustedgrub/ 18 http://sourceforge.net/projects/tboot/ 19 http://trousers.sourceforge.net/


Windows 8 incorporates better support for administering the TPM, and it implements a measured boot with attestation capabilities20. Chromebooks also use the TPM to perform a verified boot, although with certificates rather than measurements, and use the protected memory of the TPM to store the public certificate used to verify firmware signatures21. Various PC vendors also ship their devices with integrated security tools based on the TPM's secure storage functionalities; examples are BitLocker22 and HP Embedded Security23. A Linux kernel module to measure running processes and applications at run time has been developed by IBM under the name IMA (Integrity Measurement Architecture)24. Proposals to integrate TPM support into the EAP-TLS protocol have also been published [41]. Many other proposals to integrate TPM support into various systems and protocols exist in the literature, but since the vast majority remain only ideas and proofs of concept, we will not list them here.

3.4.2 TPM challenges

In general, what is missing is a standardized and widely accepted infrastructure surrounding the TPM, and although much work is in progress and more and more standards are being produced, nothing is yet deployed on a large scale. No PCA or Migration Authority to enable attestation and migration services seems to exist; in fact, not even complete functional implementations that could be deployed locally are available. The problem of revocation for compromised TPMs is still open, because of the privacy implications and the multiple Certificate Authorities (CAs) involved, and the definition of a PKI to support the TPM life-cycle is in itself a major challenge [42]. Various shortcomings of the platform attestation feature have also been pointed out [43]. Among its problems are the lack of a clear standard for reporting the measurements, the bad scalability (since huge databases with all possible approved configurations must be maintained to allow for remote verification) and the possibility to discriminate against platforms based on the software they are running. Alternative attestation methods have been proposed [44, 45], but none has made it into a widely accepted standard or actually solved the problem. Practical problems with provisioning have also been observed, as the process of activating and taking ownership of a TPM is not straightforward and involves access to BIOS settings and the installation of dedicated drivers and libraries. With Windows 8 and tighter integration with new OSes this problem should gradually disappear. Finally, upgrading and patching a system can lead to problems, as the values recorded in the PCRs might change, compromising the secure boot process and rendering sealed secrets inaccessible.

20 http://technet.microsoft.com/en-us/windows/dn168169.aspx 21 http://www.chromium.org/chromium-os/chromiumos-design-docs/verified-boot-crypto 22 http://msdn.microsoft.com/en-us/library/windows/hardware/gg487306.aspx 23 http://h20331.www2.hp.com/Hpsub/downloads/HP_ProtectTools_Embedded_Security.pdf 24 http://researcher.watson.ibm.com/researcher/view_project.php?id=2851


3.4.3 MTM, TPM 2.0 and Software TPM

Other TPM incarnations are being defined. The specifications for the new TPM 2.025 are being finalized; major differences from the current 1.2 version include flexibility in the algorithms used by the TPM, the ownership hierarchy and simplified management. A Mobile Trusted Module (MTM)26 has long been under definition, but there does not seem to be any real progress. IBM has also been working on a way to virtualize the TPM, the vTPM, so that it can be used by multiple virtual machines in a virtualized environment [46]. A software TPM emulator is also available27.

3.5 Other Hardware Crypto Modules

Hardware Security Modules (HSMs) are computing devices used to generate and protect cryptographic keys, and they are often equipped with crypto-processors. The IBM 4758 series, discontinued in 2007 and replaced by the 4764/5 series28, is one example, but a more exhaustive list is available29. HSMs are often certified for high levels of assurance and are equipped with tamper-resistant mechanisms. Like a TPM, they can perform cryptographic operations and store key material securely, but in addition they have mechanisms that prevent the disclosure of secrets in case of physical tampering. Another difference is that they are not intended to enable Trusted Computing capabilities, i.e., to secure the boot environment, provide sealing based on platform state, or perform remote attestation. Smart cards are also a kind of HSM, but they are portable rather than bound to a platform, and they are usually employed to authenticate users rather than the platform (an extensive discussion of the differences between smart cards and the TPM can be found in [47]). There is also a series of research projects defining different types of such devices, but we limit ourselves to actually available commercial products.

4 The Trusted Execution Environment

As mentioned in Section 2.4, the TEE is an attempt to exclude the BIOS and other less trusted components from the TCB and to enable a more flexible and dynamic type of root of trust for measurement (DRTM). The TEE is not a replacement for TPM-like modules, but a complement: while the trusted execution environment is established with the help of the CPU and proprietary signed firmware from the manufacturer, the TPM is still used as the root of trust for reporting and storage. However, as noted also in [48], the TPM does not necessarily need to be a chip; it can be implemented as a software component, as long as the platform provides sufficient security mechanisms. A properly implemented TEE could enable this solution, which would be especially convenient for mobile and embedded devices.

25 http://www.trustedcomputinggroup.org/resources/tpm_library_specification
26 http://www.trustedcomputinggroup.org/resources/mobile_trusted_module_20_use_cases
27 http://tpm-emulator.berlios.de/
28 http://www-03.ibm.com/security/cryptocards/pcixcc/overview.shtml
29 http://csrc.nist.gov/groups/STM/cmvp/documents/140-1/140val-all.htm


In general, a TEE is nothing more than a strong hardware-based isolation mechanism. The main goal of these technologies is to protect the execution of a specific application from a potentially compromised system. We therefore first describe the most common isolation mechanisms adopted nowadays.

4.1 Isolation Mechanisms

Isolation techniques can be divided roughly into five categories [49]:

4.1.1 Language-based Isolation

This type of isolation is provided by programming languages, compilers, assemblers and runtime environments. For instance, type-safe programming languages such as Java ensure that programs can access only appropriate memory locations and that control transfers happen only to appropriate program points. The programmer bears the burden of writing the program in conformance with the type system, and the end-user only needs to type-check the code. Certifying compilers and Proof-Carrying Code are another approach, which allows attaching security policies to the code and letting the end-user verify the safety and correctness requirements for the code.

4.1.2 Sandbox-based Isolation

Sandboxing is a technique to encapsulate untrusted code such that it cannot escape its assigned address space. This is achieved by adding extra checks to the program binary around every jump or store, ensuring that no memory outside the segment assigned to the program can be read, written or jumped to.
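A minimal Python simulation of this idea, in the spirit of classic software fault isolation, is shown below; the segment layout and sizes are invented for the example, and a real implementation would insert the masking instructions into the binary itself rather than call a helper:

```python
# Illustrative sketch of sandbox-style address confinement: every store is
# forced into the sandbox segment by masking, so untrusted code cannot write
# outside it. Segment layout and sizes are invented for the example.
SEG_BASE = 0x4000          # start of the segment assigned to the program
SEG_MASK = 0x0FFF          # segment is 4 KiB; masking keeps offsets inside it

memory = bytearray(0x10000)

def sandboxed_store(addr: int, value: int) -> None:
    # The mask an SFI rewriter would apply before every store:
    confined = SEG_BASE | (addr & SEG_MASK)
    memory[confined] = value & 0xFF

sandboxed_store(0x0000, 0xAA)   # lands at 0x4000, inside the sandbox
sandboxed_store(0xFFFF, 0xBB)   # would escape; masked back to 0x4FFF
assert all(b == 0 for b in memory[:SEG_BASE])  # nothing outside was touched
```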

4.1.3 Virtual-Machine-based Isolation

Virtualization is a technique to isolate not only single programs or processes, but entire systems, by providing a software abstraction of a real platform where code can run as if it were on a physically separate machine. Often multiple virtual machines (VMs) run on the same physical machine, and an underlying hypervisor with the highest privilege level takes care of providing and sharing resources among the VMs and mediating access to the real hardware. VMs can also be hosted on an existing operating system, but in this case the hypervisor does not run with the highest privileges. Finally, in hardware-assisted virtualization, hardware primitives provided by the processor or the I/O subsystem are used to multiplex resources among VMs, achieving much better performance.

4.1.4 OS-kernel-based Isolation

OS-kernel-based isolation has historically been the most traditional form of isolation, and for a long time the only one. The operating system kernel has always been in charge of managing resources, enforcing security policies and making sure that applications do not get unauthorized access to critical components. Efforts to improve the security of monolithic kernels have resulted in a variety of kernel types based on different design choices and requirements. Micro-kernels, for instance, aim to minimize the TCB and make it easier to verify its correctness. Hypervisors can be seen as a type of micro-kernel, as can MILS (Multiple Independent Levels of Security) separation kernels; both were born from the need for strong isolation, although the latter also aimed to ease the development of high-assurance systems.

Multi-Level Security (MLS) systems can also be seen as systems where the kernel provides strong isolation by means of strong access control mechanisms, such as Mandatory Access Control, to enforce which processes or users are allowed to access resources at different classification levels.
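As a concrete illustration of the access rules such kernels enforce, the following minimal Python sketch (classification levels are illustrative) encodes the two classic Bell-LaPadula rules [1], no read up and no write down:

```python
# Minimal sketch of the mandatory access control rules behind MLS systems
# (simple security property and *-property); levels are illustrative.
LEVELS = {"UNCLASSIFIED": 0, "RESTRICTED": 1, "SECRET": 2}

def may_read(subject_level: str, object_level: str) -> bool:
    # "No read up": a subject may only read objects at or below its level.
    return LEVELS[subject_level] >= LEVELS[object_level]

def may_write(subject_level: str, object_level: str) -> bool:
    # "No write down": a subject may only write at or above its level,
    # preventing information from leaking from high to low.
    return LEVELS[subject_level] <= LEVELS[object_level]

assert may_read("SECRET", "RESTRICTED") and not may_read("RESTRICTED", "SECRET")
assert may_write("RESTRICTED", "SECRET") and not may_write("SECRET", "RESTRICTED")
```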

4.1.5 Hardware-based Isolation

As mentioned at the beginning of this section, a TEE is in practice a hardware-based isolation mechanism. This form of isolation is usually provided by the processor in conjunction with other hardware components that are “security-aware”. Typically, a Memory Management Unit (MMU) is used to assign well-defined address spaces where specific applications are allowed to execute, together with mechanisms that block DMA access by unauthorized components, such as device drivers. Intel TXT [50] together with Intel VT-d and VT-x, AMD SVM [51] and ARM TrustZone [52] are examples of hardware isolation technologies, i.e., possible implementations of a TEE.

4.2 DRTM

A DRTM, or Dynamic Root of Trust for Measurement, is a hardware mechanism to dynamically establish a secure running environment on a platform, without having to go through a disruptive process like a complete reboot. The TCG has published a general description of how such a root of trust might be implemented and integrated with a TPM [53]. The Intel TXT [50] and AMD SVM [51] implementations have probably been used as a basis for that document, but a DRTM and a TPM are not strictly required to implement a TEE, as we will see in the case of ARM TrustZone; they are used when attestation capabilities are desired. The AMD and Intel solutions build on almost identical principles, so we describe only the Intel version here, as it is much better documented. In general, from [53]: “The D-CRTM is the Core Root of Trust for the dynamically launched environment that is initiated by a DL event. As a result of the event, the processor is placed into a known good state, the dynamic PCR are reset, and the processor will execute immutable code that performs the initial measurements. The implementation of the immutable code is platform-specific, including its size and its functionality. The D-CRTM ends when the required measurements are extended. The initial measurements SHALL be extended into PCR”. The main difference from other, software-based isolation mechanisms is that the TEE is invoked through a special processor instruction, called SENTER in the Intel case and SKINIT by AMD. Besides, as proof that the process was actually performed by the processor and not by malware simulating the DRTM, a special PCR register, number 17, which is reserved for privileged hardware, is reset. To achieve this, an extra locality reserved for the DRTM has been added to the platform privilege hierarchy [54]. Figure 4.1 describes the process, called Late Launch, that is started when the SENTER instruction is invoked on an Intel platform.


Figure 4.1 Late Launch technology by Intel

All but one processor are stopped and all external events are masked. The immutable piece of code constituting the core root of trust provided by Intel, the SINIT module, is validated, and PCR 17 is initialized to a special value before the SINIT hash is extended into it. The SINIT code is then loaded and executed to create a secure environment where the MLE (Measured Launch Environment) code can be executed. Among other things, the Launch Control Policy (LCP) is also extended into the PCRs, and the MLE is launched only if its hash, the version of the SINIT module and the PCRs initialized by the S-RTM match the ones in the LCP. The LCP itself is protected by the TPM NVRAM so that it cannot be changed by unauthorized entities. Now, besides being able to execute the MLE code in an isolated environment, a third party can also verify that the Late Launch process actually took place and what code was loaded.
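A remote verifier could, in a simplified view, recompute the PCR 17 value it expects from known-good hashes of the SINIT module, the LCP and the MLE, and compare it with the quoted value. The Python sketch below compresses this into a few lines; the reset value and measurement order are simplified, and the real quote and event log formats follow the TCG and Intel specifications:

```python
# Simplified verifier-side view of a Late Launch: recompute the expected
# PCR 17 from known-good component hashes and compare with the quoted value.
import hashlib

def sha1(data: bytes) -> bytes:
    return hashlib.sha1(data).digest()

def extend(pcr: bytes, digest: bytes) -> bytes:
    return sha1(pcr + digest)

DRTM_RESET = b"\x00" * 20   # simplified stand-in for the DRTM reset value

def expected_pcr17(sinit: bytes, lcp: bytes, mle: bytes) -> bytes:
    pcr = DRTM_RESET
    for component in (sinit, lcp, mle):
        pcr = extend(pcr, sha1(component))
    return pcr

good = expected_pcr17(b"SINIT v2", b"launch-control-policy", b"xen-hypervisor")
bad  = expected_pcr17(b"SINIT v2", b"launch-control-policy", b"tampered-hypervisor")
assert good != bad   # a modified MLE cannot reproduce the quoted PCR 17 value
```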

4.2.1 Tboot and OSLO

Tboot and OSLO, mentioned earlier, are two examples of boot-loaders that can exploit the DRTM provided by Intel and AMD, respectively, to perform a late launch of the OS. OSLO was designed as a proof of concept to demonstrate how other trusted boot-loaders like Trusted Grub, which uses only a static root of trust, were not sufficient to guarantee that the OS was launched from a secure pre-boot environment. Tboot, instead, was designed specifically to launch the Xen hypervisor30, which can also leverage the hardware virtualization mechanisms offered by the Intel vPro technology31. Ordinary Linux kernels can also be used, but they would not exploit all the advantages of this technology.

30 http://wiki.xenproject.org/wiki/Xen_Overview
31 http://www.intel.com/content/www/us/en/virtualization/virtualization-technology/hardware-assist-virtualization-technology.html


The book Intel Trusted Execution Technology for Server Platforms [55] explains in detail how to exploit Intel TXT and Tboot to secure one’s data centers.

Figure 4.2 Static root of trust vs. dynamic root of trust

4.2.2 Problems with DRTM implementations

The idea of a DRTM is in general a good one, but the current implementations suffer from a series of problems, both in terms of security and of other limitations. We consider only the Intel implementation here, as it is the most studied. The main functional issue with the SENTER instruction is that it was designed to support one late launch per boot cycle, that is, to securely launch the OS or hypervisor that will then be used to further secure the running system. Thanks to the LCP it is more flexible than a trusted boot, and it provides better isolation mechanisms, but it offers essentially the same functionality. If the OS or hypervisor contains implementation errors or vulnerabilities, they can still be exploited by an attacker to bypass these protections at run time, as shown in a series of presentations by The Invisible Things Lab: the Xen Owning Trilogy32 and another of their papers [56]. The same team went on to show how the Intel TXT technology itself could be bypassed, and how the BIOS, and therefore the S-CRTM, was actually still a part of the TCB, since one had to trust it in order to trust the DRTM [57, 58]. Even pure software attacks were shown to be possible, by exploiting a buffer-overflow vulnerability in the SINIT AC module [59]. A better use of a DRTM would be “on demand”, launching single critical applications in a TEE when needed. Unfortunately, only one Late Launch per boot cycle is possible with Intel TXT, as there is only one special PCR to attest that the DRTM was used, and no way to register multiple executions. A notable attempt to circumvent this problem was made by Jonathan McCune with Flicker.

32 http://invisiblethingslab.com/resources/bh08/

4.2.3 Flicker [60]

Flicker is a framework to develop applications, called PALs (Pieces of Application Logic), that can be launched and executed in both the Intel and the AMD TEE. Although the idea is very appealing, this approach has many drawbacks. The first is that Intel TXT was not designed to support multiple and frequent launches of the DRTM; rather, the late launch process was intended to take place only once, at boot time. This means that Flicker's performance is quite poor and stability issues are far from rare. The other important obstacle to its wide adoption is that there is no easy way to run inter-dependent Flicker sessions. When Flicker is run, the current OS must be stopped, and its state recorded and restored at the end of the Flicker session. This means that Flicker sessions must be quick in order not to disrupt the user experience, and the next Flicker session will have no memory of what happened in the previous one, unless some data was securely stored and made available to the new session. This precludes, for instance, the efficient execution of long and complicated secure services over a network, or secure monitoring applications. Finally, every driver or library needed to support a PAL must be implemented from scratch and included in the Flicker session, since no support from the OS is available. It is easy to see that the code needed for more advanced services might become too large, incurring the usual problem that its correctness becomes impossible to verify.

An attempt to overcome some of these problems is the XMHF project [61], which provides a framework to build secure hypervisors. XMHF offers a platform with some formally verified security properties, like memory separation, and allows adding modules with specific security extensions without breaking its basic security. Among these extensions we find TrustVisor [62], which is in practice a virtualization of Flicker: Intel TXT is used to securely launch XMHF once and, trusting its security properties, a virtual machine with a virtual TPM is created that simulates the secure environment given by Flicker. The main difference is that there is now little or no overhead per session launched, and drivers and libraries are available from the XMHF kernel.
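One way to pass state between otherwise memoryless sessions is to seal it to the PAL's own measurement, so that only the same code can recover it later. The toy Python sketch below models sealing as a PCR-guarded lookup (a real TPM instead encrypts the blob under a key bound to the PCR state); the PAL contents are invented for illustration:

```python
# Toy illustration of passing state between Flicker-style sessions:
# session N seals data to the PAL's measurement; session N+1 can unseal it
# only if the same measurement is presented again.
import hashlib

def measure(pal_code: bytes) -> bytes:
    return hashlib.sha1(pal_code).digest()

class ToySealedStorage:
    def __init__(self):
        self._blobs = {}
    def seal(self, pcr: bytes, name: str, data: bytes) -> None:
        self._blobs[name] = (pcr, data)
    def unseal(self, pcr: bytes, name: str) -> bytes:
        sealed_pcr, data = self._blobs[name]
        if sealed_pcr != pcr:
            raise PermissionError("PCR mismatch: different code is running")
        return data

storage = ToySealedStorage()
pal = b"PAL: incremental signature service"
storage.seal(measure(pal), "session-state", b"counter=42")

assert storage.unseal(measure(pal), "session-state") == b"counter=42"
try:
    storage.unseal(measure(b"malicious PAL"), "session-state")
except PermissionError:
    pass  # a different PAL cannot recover the previous session's state
```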

4.3 ARM TrustZone

ARM TrustZone [52, 63] is another implementation of the TEE. It is aimed mainly at mobile and embedded devices, and it differs from Intel TXT in that it supports a secure environment running alongside a “normal” one, rather than just securely launching one OS. It includes secure software offering basic security services such as cryptography, safe storage and integrity checking.

FFI-rapport 2014/00920 31

4.3.1 General concepts

The basic idea behind TrustZone is to let a Normal World, containing a usual rich OS under the control of the user, run alongside a Secure World residing in a protected trusted environment. Unlike other TEEs, one can switch between these two worlds when needed, thanks to a software component called the TrustZone Monitor. The platform manufacturer should provide the basic secure building blocks of the platform, like a secure boot loader and secure drivers and firmware to perform platform authentication, cryptographic services and memory protection33, while the secure software or kernel running in and managing the TEE can be developed according to one's needs.

The architecture of the Secure World software is left open to developers, and ARM suggests three different possibilities in [52]:

• A secure operating system: this is the most complex and powerful option, where a complete trusted OS runs in the secure world and can offer a variety of services to the normal world while running multiple virtual secure worlds.

• A synchronous library: This is the simplest option (see the sketch after this list). “A simple library of code in the Secure World which can handle one task at a time is sufficient for many applications. This code library is entirely scheduled and managed using software calls from the Normal World operating system. The Secure world in these systems is a slave to the Normal World and cannot operate independently, but can consequently have a much lower level of complexity.”

• Intermediate solutions: “There is a range of options that lies between these two extremes. For example, a Secure World multi-tasking operating system may be designed to have no dedicated interrupt source, and as such could be provided with a virtual interrupt by the Normal world. This design would be vulnerable to a denial-of-service attack if the Normal world operating system stopped providing the virtual interrupt, but for many cases this is not a problematic attack. Alternatively, the MMU could be used to statically separate different components of an otherwise synchronous Secure World library.”
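As promised above, here is a minimal Python sketch of the synchronous-library model: the Secure World holds a secret and exposes a small fixed service interface, and runs only when the Normal World calls into it. On real hardware the call would go through the SMC instruction and the Monitor; the service names and the toy services themselves are invented for the example.

```python
# Sketch of the "synchronous library" model: one task at a time, scheduled
# entirely by the Normal World, with secrets confined to the Secure World.
import hashlib, hmac, os

class SecureWorld:
    def __init__(self):
        self._key = os.urandom(32)   # secret that never leaves the Secure World

    def call(self, service: str, payload: bytes) -> bytes:
        # Entry point the Monitor would dispatch to on a world switch.
        if service == "hash":
            return hashlib.sha256(payload).digest()
        if service == "mac":
            return hmac.new(self._key, payload, hashlib.sha256).digest()
        raise ValueError("unknown secure service")

# Normal World side: it drives all scheduling, but never sees the key.
sw = SecureWorld()
tag = sw.call("mac", b"message from the rich OS")
```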

33 http://www.fidis.net/resources/fidis-deliverables/hightechid/int-d37002/doc/16/

Figure 4.3 The ARM TrustZone Architecture [63]


4.3.2 Boot process

The integrity of the Secure World is of course paramount, and the boot sequence has been designed to prevent the compromise of the system until the Secure World is in control, in a fashion very similar to the trusted boot presented in Section 1.3 and the Intel Late Launch.

Figure 4.5 A typical boot-sequence of a TrustZone-enabled processor [52]

The machine starts by loading the pre-boot environment and launching a ROM-based boot-loader that initializes critical peripherals before switching to a device boot-loader that launches the Secure World. The difference from Intel TXT is that at this point the Normal World boot-loader is run, while with Intel TXT only one OS is launched and it always runs in secure mode. Something similar could be achieved by having a hypervisor running in the Intel TEE which in turn loads a rich OS in a virtual machine that would mimic the Normal World. In this case, however, the hypervisor would have to simulate the security extensions provided by TrustZone and the Monitor.

Figure 4.4 Example of the Secure World implementation [52]


The chain of trust is also established as in the trusted boot process described in Section 1.3: the process starts from an implicitly trusted component whose identity is verified by a public certificate securely stored on the platform, and trust is transitively passed on to the next component in the boot chain. However, no TPM is explicitly used for the trusted boot, since platform integrity is verified by certificates rather than integrity measurements, and remote attestation is not part of the TrustZone design, although it can be added to complement the secure architecture. A similar approach is also used on Samsung Chromebooks, where the verified boot uses digital signatures to authenticate the platform, but stores the public certificates in the TPM-protected NVRAM to defend against key rollback attacks34.
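A simplified model of such a certificate-based boot chain is sketched below in Python. Real implementations verify RSA or ECDSA signatures against certificates anchored in ROM; here each stage's trust anchor is reduced to an expected SHA-256 digest, and the image names are invented:

```python
# Simplified model of a TrustZone-style boot chain: each stage holds a trust
# anchor for the next image and refuses to transfer control if it fails
# verification. Anchors stand in for the certificates used in practice.
import hashlib

def digest(image: bytes) -> bytes:
    return hashlib.sha256(image).digest()

def boot_chain(images, anchors):
    """images: list of stage binaries; anchors[i] authenticates images[i]."""
    for image, anchor in zip(images, anchors):
        if digest(image) != anchor:
            raise RuntimeError("boot halted: stage failed verification")
        # control would be transferred to the verified stage here

stages = [b"device bootloader", b"secure world kernel", b"normal world OS"]
anchors = [digest(s) for s in stages]        # provisioned at manufacture time
boot_chain(stages, anchors)                  # succeeds

stages[1] = b"tampered secure world kernel"
try:
    boot_chain(stages, anchors)
except RuntimeError:
    pass  # the compromised stage is never executed
```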

4.3.3 TrustZone maturity

ARM itself provides no open implementations of a Secure World, only API specifications and general guidelines, which have to be implemented by those interested in taking advantage of the security extensions35. The API specifications are now also being standardized in collaboration with GlobalPlatform to allow for easier compliance and certification of TrustZone-based implementations36. The TrustZone Ready Program37 aims to provide support in the form of high-level security requirements mapped to concrete use cases, blueprints, documents and checklists, to ease alignment with industry certification standards and speed up certification. Some implementations of a TEE for TrustZone following the GlobalPlatform specifications are already available on the market: SierraTEE38, CryptoCell by Discretix39, solutions by Texas Instruments, SecuriTEE by Solacia40 and MobiCore by G&D41. Samsung has also developed a secure Android-based OS called Samsung Knox42, which seems to integrate trusted computing concepts like trusted boot and integrity measurements with the TrustZone extensions. Knox has also been used as a base for the Samsung KNOX Hypervisor, a collaboration with Green Hills Software, which has been announced as approved for use on sensitive U.S. Department of Defense enterprise networks43. Unfortunately, no open implementations of these solutions seem to be available.

34 http://www.chromium.org/chromium-os/chromiumos-design-docs/verified-boot-crypto
35 http://www.arm.com/products/processors/technologies/trustzone/tee-reference-documentation.php
36 http://www.globalplatform.org/specificationsdevice.asp
37 http://www.arm.com/products/security-on-arm/trustzone-ready/index.php
38 http://www.openvirtualization.org
39 http://www.discretix.com/cryptocell-for-trustzone
40 http://www.sola-cia.com/en/securiTee/product.asp
41 http://www.gi-de.com/fra/en/trends_and_insights/mobicore/mobicore_1/mobicore.jsp, http://m.gi-de.com/gd_media/media/documents/complementary_material/events_1/04_STE_CARTES__Demo_Presentation.pdf
42 http://www.samsung.com/at/business-images/resource/case-study/2013/12/samsung_knox_overview_whitepaper-0.pdf
43 http://www.ghs.com/news/20140317_samsung_certify_government.html


4.3.4 ARM TrustZone Discussion

Although TrustZone has been around for more than a decade, only recently has it attracted enough interest to motivate standardization efforts and wider adoption among developers. A possible reason is the enormous penetration of mobile devices in our daily lives, most of which run on ARM processors, and the demand for better security as these devices are used for more and more economically sensitive tasks, such as financial transactions, paid entertainment and cloud services. Naturally, not only the mobile users' economic interests are at stake, but also those of the companies providing the paid services. It should therefore not be surprising that secure DRM and virtual payment solutions are among the ones being developed.

Regarding its actual security, it is worth noting that the Protection Profile defined by GlobalPlatform for the TEE does not seem to aim at assurance levels higher than EAL 2+44. Besides, TrustZone does not seem to solve the problem of device authentication to the user, which we mentioned regarding the TPM as well in connection with the cuckoo attack; this is in fact left as future work in the Protection Profile definition. Although, when running an application inside the Secure World, one can expect a trusted path between the screen/keyboard and the application, it is not clear how users can have any proof that the device is actually running in secure mode. The problem of how to provision secure applications to the Secure World is not very clear either. It will probably be handled in a way similar to the iOS model, where a trusted third party (TTP) reviews and signs the applications that can be installed. In the TrustZone case, however, there are multiple phone manufacturers and application vendors, so there might be a fragmentation of TTPs that could make it difficult to provide a consistent and secure provisioning infrastructure. On the other hand, this could also mean that a private company might set up its own TTP if that was desirable, although an open and configurable TEE would then be required. Finally, a proof of concept of a rootkit for TrustZone seems to be available already45, together with some analyses of existing TEE implementations46 and a short guide to ARM exploitation47.

In conclusion, TrustZone is an interesting technology that can be used to run a secure OS or secure services alongside a rich OS, thereby creating a non-disruptive user experience, but its adoption on a large scale is just starting and more vulnerabilities will most likely be uncovered. At the same time, it could provide a promising approach to developing trusted mobile devices without the need to build custom and expensive certifiable solutions, at least for lower classification levels.

44 https://www.fbcinc.com/e/iccc/presentations/T2_D1_3_30pm_Lavatelli_Cert_of_the_Trusted_Exec_Env.pdf
45 http://leveldown.de/hip_2013.pdf
46 http://www.sensepost.com/blog/9114.html
47 http://www.exploit-db.com/wp-content/themes/exploit/docs/24493.pdf


5 Conclusions

Trusted computing technology is just starting to take off, although it has been around for more than a decade. The fact that it can be used to lock down a device and enforce restrictions on how it can handle data makes it very difficult for the general public to accept, but at the same time it makes it very suitable for military applications. The same applies to privacy concerns. Still, wider commercial acceptance is positive, because it also means larger availability of tools to develop custom solutions, better compatibility with commodity systems, and thorough testing in a wide variety of situations and diverse communities.

At FFI we have already started investigating the possibility of using TPM devices in systems with more stringent military requirements, to understand whether they can actually bring any added value to their security. So far we have successfully integrated a TPM-based attestation protocol into the tactical identity management system Gismo IdM [64], and consequently into a publish-subscribe message delivery system with flexible mechanisms for information flow control [65]. We have also proposed a simplified TPM for attesting genuineness in embedded devices [66]. Next, we are planning to study how TPMs and TEEs might be used in the context of flexible information exchange between security domains, in particular for data labelling and Attribute Based Access Control (ABAC) in conjunction with high-assurance guards. In the same context, some NATO projects have also started considering TPM modules for platform authentication [67], but we believe that TPM and TEE technologies can also be used for trusted binding of metadata (or labels), which is currently a major challenge, as indicated also in [68]. In fact, although standards to define metadata and binding mechanisms have been proposed [69, 70, 71], only proprietary or classified implementations exist, about which little is publicly known.

Other potential applications of the technology reviewed in this report are vast, as it can be used to harden security and trust in virtually any device and service. While Intel TXT might be used to develop secure solutions for commodity servers and desktop workstations, ARM TrustZone, which is aimed at mobile devices, might be used to design secure solutions to the BYOD problem. This could, for example, be useful in emergency scenarios where civilians need to collaborate with military forces and share resources. The TPM, in all its incarnations, can provide better secure storage for key material and classified information than any commodity computer with software-only encryption can today. Remote attestation can be used to dynamically build trusted enclaves in a heterogeneous network, and much more. In general, we believe that these technologies can and should be used to further secure commodity computers that are already available and deployed, while the concepts on which they are based could be used to design more flexible high-assurance equipment that can then be certified at the desired assurance level. In this regard, a Protection Profile that models trusted computing support in high-assurance kernels has been evaluated and certified at EAL 5 [72], although no product seems to have been certified according to this PP yet. The NSA has also recognized the need to adopt trusted computing technologies, as it recently announced that it is prepared to permit the adoption of TPMs for National Security Systems48, and made this official shortly after [36].

In conclusion, despite some open challenges, TPM, TEE and similar technologies could be highly beneficial in the development of a more robust and secure network-based infrastructure, and investigation of how to adopt them in a military context should be pursued.

48 http://www.trustedcomputinggroup.org/media_room/news/331

6 Bibliography

[1] D. E. Bell and L. J. La Padula, "Secure Computer System: Unified Exposition and Multics Interpretation," Mitre Corporation, Bedford, MA, 1976.
[2] R. Haakseth and M. Andreassen, "Oasis Demonstration – Secure Information Exchange between Military and Civilian Systems," FFI, Kjeller, NO, 2008.
[3] NIST, "NISTIR 4659: Glossary of Computer Security Terminology," NIST, 1991.
[4] NIST, "NIST Special Publication 800-53 Rev. 4: Security and Privacy Controls for Federal Information Systems and Organizations," NIST, 2013.
[5] P. Veríssimo, M. Correia, N. Neves and P. Sousa, "Intrusion-Resilient Middleware Design and Validation," Handbooks in Information Systems, vol. 4, pp. 615-678, 2009.
[6] Common Criteria, "Common Criteria for Information Technology Security Evaluation Version 3.1 Rev. 4," 2012.
[7] C. P. Pfleeger and S. L. Pfleeger, Security in Computing, 3rd Ed., Upper Saddle River, NJ, USA: Prentice Hall PTR, 2003.
[8] A. Perrig, B. Parno and J. M. McCune, Bootstrapping Trust in Modern Computers, Springer, 2011.
[9] A. Goldstein, B. Lampson, C. Kaufman and M. Gasser, "The Digital Distributed System Security Architecture," in National Computer Security Conference, 1989.
[10] W. A. Arbaugh, D. J. Farber and J. M. Smith, "A Secure and Reliable Bootstrap Architecture," in Proceedings of the 1997 IEEE Symposium on Security and Privacy, 1997.
[11] R. Kennell and L. H. Jamieson, "Establishing the Genuinity of Remote Computer Systems," in Proceedings of the 12th USENIX Security Symposium, Berkeley, CA, USA, 2003.
[12] Q. Yan, J. Han, Y. Li, R. H. Deng and T. Li, "A Software-based Root-of-trust Primitive on Multicore Platforms," in Proceedings of the 6th ACM Symposium on Information, Computer and Communications Security, New York, NY, USA, 2011.
[13] A. Seshadri, M. Luk, E. Shi, A. Perrig, L. van Doorn and P. Khosla, "Pioneer: Verifying Code Integrity and Enforcing Untampered Code Execution on Legacy Systems," SIGOPS Oper. Syst. Rev., vol. 39, no. 5, pp. 1-16, 2005.
[14] A. Seshadri, A Software Primitive for Externally-verifiable Untampered Execution and Its Applications to Securing Computing Systems, Pittsburgh, PA, USA: Carnegie Mellon University, 2009.
[15] A. Seshadri, A. Perrig, L. van Doorn and P. Khosla, "SWATT: SoftWare-based ATTestation for Embedded Devices," in Proceedings of the 2004 IEEE Symposium on Security and Privacy, 2004.
[16] C. Castelluccia, A. Francillon, D. Perito and C. Soriente, "On the Difficulty of Software-based Attestation of Embedded Devices," in Proceedings of the 16th ACM Conference on Computer and Communications Security, New York, NY, USA, 2009.
[17] A. Francillon, C. Castelluccia, D. Perito and C. Soriente, "Comments on "Refutation of On the Difficulty of Software-Based Attestation of Embedded Devices"," 2011.
[18] L. Chen, J. Franklin and A. Regenscheid, NIST Special Publication 800-164: Guidelines on Hardware-Rooted Security in Mobile Devices (Draft), NIST, 2012.
[19] B. Kauer, "OSLO: Improving the Security of Trusted Computing," in Proceedings of the 16th USENIX Security Symposium, Boston, MA, USA, 2007.
[20] S. Türpe, A. Poller, J. Steffan, J.-P. Stotz and J. Trukenmüller, "Attacking the BitLocker Boot Process," in Proceedings of the 2nd International Conference on Trusted Computing (Trust 2009), Oxford, UK, 2009.
[21] TCG, "TPM Main Specification - Part 1 Design Principles Version 1.2 Rev. 116," Trusted Computing Group, 2011.
[22] G. Shpantzer, Implementing Hardware Roots of Trust: The Trusted Platform Module Comes of Age, SANS Analyst Program sponsored by TCG, 2013.
[23] D. Cooper, W. Polk, A. Regenscheid and M. Souppaya, "NIST Special Publication 800-147: BIOS Protection Guidelines," NIST, 2011.
[24] K. Scarfone and A. Regenscheid, "NIST Special Publication 800-155: BIOS Integrity Measurement Guidelines (Draft)," NIST, 2011.
[25] M. Pirker, R. Toegl, D. M. Hein and P. Danner, "A PrivacyCA for Anonymity and Trust," in Proceedings of TRUST '09 - The 2nd International Conference on Trusted Computing, Oxford, UK, 2009.
[26] E. Brickell, J. Camenisch and L. Chen, "Direct Anonymous Attestation," in Proceedings of CCS '04 - The 11th ACM Conference on Computer and Communications Security, New York, NY, USA, 2004.
[27] TCG, "TCG Specification Architecture Overview Rev. 1.4," Trusted Computing Group, 2007.
[28] TCG, Trusted Computing Group Protection Profile PC Client Specific Trusted Platform Module TPM Family 1.2 Level 2 Revision 116, Trusted Computing Group, 2011.
[29] J. Butterworth, C. Kallenberg, X. Kovah and A. Herzog, "BIOS Chronomancy: Fixing the Core Root of Trust for Measurement," in Proceedings of CCS '13 - ACM Conference on Computer and Communications Security, Berlin, Germany, 2013.
[30] J. A. Halderman, S. D. Schoen, N. Heninger, W. Clarkson, W. Paul, J. A. Calandrino, A. J. Feldman, J. Appelbaum and E. W. Felten, "Lest We Remember: Cold Boot Attacks on Encryption Keys," in Proceedings of the 2008 USENIX Security Symposium, San Jose, CA, 2008.
[31] TCG, "TCG Platform Reset Attack Mitigation Specification," Trusted Computing Group, 2008.
[32] D. Gawrock, H. Reimer, A.-R. Sadeghi and C. Vishik, "Offline Dictionary Attack on TCG TPM Weak Authorisation Data, and Solution," in Proceedings of the First International Conference on the Future of Trust in Computing, 2008.
[33] B. J. Parno, "Trust Extension as a Mechanism for Secure Code Execution on Commodity Computers," Carnegie Mellon University, Pittsburgh, PA, 2010.
[34] B. Parno, "Bootstrapping Trust in a "Trusted" Platform," in Proceedings of HOTSEC '08 - The 3rd Conference on Hot Topics in Security, Berkeley, CA, 2008.
[35] R. A. Fink, A. T. Sherman, A. O. Mitchell and D. C. Challener, "Catching the Cuckoo: Verifying TPM Proximity Using a Quote Timing Side-Channel," in Proceedings of TRUST 2011 - The 4th International Conference on Trust and Trustworthy Computing, Pittsburgh, PA, 2011.
[36] U.S. Department of Defense, Instruction Number 8500.01, 2014.
[37] TCG, TCG PC Client Specific Implementation Specification for Conventional BIOS, Version 1.21 Errata, Trusted Computing Group, 2012.
[38] TCG, TCG Software Stack (TSS) Specification Version 1.2 Level 1, Trusted Computing Group, 2006.
[39] R. Toegl, T. Winkler, M. Nauman and T. W. Hong, "Specification and Standardization of a Java Trusted Computing API," Softw., Pract. Exper., vol. 42, no. 8, pp. 945-965, 2012.
[40] C. Stüble and A. Zaerin, "μTSS – A Simplified Trusted Software Stack," in Proceedings of TRUST 2010 - The Third International Conference on Trust and Trustworthy Computing, Berlin, Germany, 2010.
[41] C. Latze and U. Ultes-Nitsche, "A Proof-of-Concept Implementation of EAP-TLS with TPM Support," in Proceedings of the ISSA 2008 Innovative Minds Conference, Johannesburg, South Africa, 2008.
[42] S. Balfe, E. Gallery, C. J. Mitchell and K. G. Paterson, "Challenges for Trusted Computing," IEEE Security & Privacy, vol. 6, no. 6, pp. 60-66, 2008.
[43] E. M. Gallery and C. J. Mitchell, "Trusted Computing: Security and Applications," Cryptologia, vol. 33, pp. 217-245, 2009.
[44] A.-R. Sadeghi and C. Stüble, "Property-based Attestation for Computing Platforms: Caring About Properties, Not Mechanisms," in Proceedings of NSPW '04 - The 2004 ACM Workshop on New Security Paradigms, New York, NY, USA, 2004.
[45] V. Haldar, D. Chandra and M. Franz, "Semantic Remote Attestation: A Virtual Machine Directed Approach to Trusted Computing," in Proceedings of VM '04 - The 3rd Virtual Machine Research and Technology Symposium, San Jose, CA, USA, 2004.
[46] S. Berger, R. Cáceres, K. A. Goldman, R. Perez, R. Sailer and L. van Doorn, "vTPM: Virtualizing the Trusted Platform Module," in Proceedings of the 15th USENIX Security Symposium, Berkeley, CA, USA, 2006.
[47] M. W. Murhammer, A Comparison between Smart Cards and Trusted Platform Modules in Business Scenarios (Master Thesis), Danube University Krems, Austria, 2006.
[48] TCG, "TPM MOBILE with Trusted Execution Environment for Comprehensive Mobile Device Security," Trusted Computing Group, 2012.
[49] A. Viswanathan and B. C. Neuman, "A Survey of Isolation Techniques (Draft Copy)," University of Southern California, 2010.
[50] D. Grawrock, Dynamics of a Trusted Platform: A Building Block Approach, Intel Press, 2009.
[51] AMD, "Secure Virtual Machine Architecture Reference Manual," AMD, 2005.
[52] ARM, "ARM Security Technology: Building a Secure System using TrustZone® Technology," ARM Limited, 2009.
[53] TCG, "TCG D-RTM Architecture," Trusted Computing Group, 2013.
[54] TCG, TCG PC Client Specific TPM Interface Specification (TIS) Ver. 1.3, Trusted Computing Group, 2013.
[55] W. Futral and J. Greene, Intel® Trusted Execution Technology for Server Platforms - A Guide to More Secure Datacenters, Apress Open, 2013.
[56] R. Wojtczuk and J. Rutkowska, "Following the White Rabbit: Software Attacks against Intel® VT-d Technology," The Invisible Things Lab, 2011.
[57] R. Wojtczuk and J. Rutkowska, "Attacking Intel® Trusted Execution Technology," in Black Hat DC, 2009.
[58] R. Wojtczuk, J. Rutkowska and A. Tereshkin, "Another Way to Circumvent Intel® Trusted Execution Technology," The Invisible Things Lab, 2009.
[59] R. Wojtczuk and J. Rutkowska, "Attacking Intel TXT via SINIT Code Execution Hijacking," The Invisible Things Lab, 2011.
[60] J. M. McCune, "Reducing the Trusted Computing Base for Applications on Commodity Systems," PhD Thesis, Carnegie Mellon University, Pittsburgh, PA, 2009.
[61] A. Vasudevan, S. Chaki, L. Jia, J. M. McCune, J. Newsome and A. Datta, "Design, Implementation and Verification of an eXtensible and Modular Hypervisor Framework," in IEEE Symposium on Security and Privacy, Berkeley, CA, USA, 2013.
[62] J. M. McCune, Y. Li, N. Qu, Z. Zhou, A. Datta, V. D. Gligor and A. Perrig, "TrustVisor: Efficient TCB Reduction and Attestation," in IEEE Symposium on Security and Privacy, Berkeley/Oakland, CA, USA, 2010.
[63] T. Alves and D. Felton, "TrustZone: Integrated Hardware and Software Security - Enabling Trusted Computing in Embedded Systems," Information Quarterly, vol. 3, no. 4, pp. 18-24, 2004.
[64] A. Fongen and F. Mancini, "The Integration of Trusted Platform Modules into a Tactical Identity Management System," in Proceedings of the Military Communications Conference MILCOM 2013, San Diego, CA, USA, 2013.
[65] A. Fongen and F. Mancini, "Identity Management and Integrity Protection in Publish-Subscribe Systems," in Proceedings of IDMAN 2013 - The Third IFIP WG 11.6 Working Conference on Policies and Research in Identity Management, London, UK, 2013.
[66] A. Fongen and F. Mancini, "Attested Genuineness in Service Oriented Environments," in Proceedings of ICDIPC 2013 - The Third International Conference on Digital Information Processing and Communications, Dubai, UAE, 2013.
[67] A. Armando, M. Grasso, S. Oudkerk, S. Ranise and K. Wrona, "Content-based Information Protection and Release in NATO Operations," in Proceedings of SACMAT '13 - The 18th ACM Symposium on Access Control Models and Technologies, New York, NY, USA, 2013.
[68] S. Oudkerk and G. Lunt, "An Incremental Approach to Trusted Labelling in Support of Cross-Domain Information Sharing," NC3A, The Hague, Netherlands, 2011.
[69] S. Oudkerk, I. Bryant, A. Eggen and R. Haakseth, "A Proposal for an XML Confidentiality Label Syntax and Binding of Metadata to Data Objects," in NATO RTO Symposium on Information Assurance and Cyber Defence, 2010.
[70] G. Lunt, S. Oudkerk and A. Ross, "NATO Metadata Binding Service," NCIA, The Hague, Netherlands, 2010.
[71] S. Oudkerk and K. Wrona, "Using NATO Labelling to Support Controlled Information Sharing between Partners," in Proceedings of CRITIS 2013 - The 8th International Workshop on Critical Information Infrastructures Security, Amsterdam, The Netherlands, 2013.
[72] H. Löhr, A.-R. Sadeghi, C. Stüble, M. Weber and M. Winandy, "Modeling Trusted Computing Support in a Protection Profile for High Assurance Security Kernels," in Proceedings of TRUST 2009 - The 2nd International Conference on Trusted Computing, Oxford, UK, 2009.
[73] M. H. Kang, I. S. Moskowitz and S. Chincheck, "The Pump: A Decade of Covert Fun," in Proceedings of ACSAC '05 - The 21st Annual Computer Security Applications Conference, Tucson, AZ, USA, 2005.