
3 Building Chain of Trust

Corrupting critical components of computer systems, tampering with running code and modifying configuration files have become among the most popular attack methods used by hackers. These attacks subvert the originally trusted execution environment (TEE) of a system by modifying its running code and critical configuration files, and then use the resulting untrusted execution environment to launch further attacks. Building TEEs for computer systems is therefore a critical problem in the field of computer security.

To build a trusted execution environment, we first need to clarify the definition of trust. There are many different definitions of trust in trusted computing. The international standard ISO/IEC [95] defines trust as follows: “A trusted component, operation, or process is one whose behavior is predictable under almost any operating condition and which is highly resistant to subversion by application software, virus, and a given level of physical interference.” IEEE [96] defines trust as “the ability to deliver service that can justifiably be trusted.” TCG defines trust as follows [3]: something is trusted “if it always behaves in the expected manner for the intended purpose.” All these definitions have one point in common: they emphasize the expected behavior of an entity and the security and reliability of the system. TCG’s definition centers on the characteristics of an entity’s behavior, and it is well suited to describing users’ requirements for trust. Such a definition still requires a method for implementing trust. TCG gives one method for building trusted execution environments for computer systems: the chain of trust.

The chain of trust based on trusted computing is a technology introduced against this background. By measuring every layer of the computer system and transferring trust between entities, TCG establishes trustworthiness from the hardware up to the application layer and uses the security chip embedded in the platform to protect the measured data. In this way, chains of trust build TEEs for computer systems. A chain of trust of this kind not only provides a TEE for users but also provides evidence that programs run inside the TEE. It can also be combined with traditional network technology to extend trustworthiness to the network environment.

In this chapter, we first introduce the trust anchor, the root of trust, including the root of trust for measurement (RTM), the root of trust for storage (RTS) and the root of trust for reporting (RTR). Next, we will introduce the principles of building chains of trust. Then we will introduce popular systems based on static chains of trust and dynamic chains of trust. Finally, we will introduce the method of building chains of trust for virtualization platforms.

3.1 Root of Trust

3.1.1 Introduction of Root of Trust

Consider a common scenario in which an entity A launches an entity B, and B then launches an entity C. For the user to trust C, he/she must first trust B, and to trust B, he/she must ultimately trust A. To build such a chain of trust, we proceed as follows:

(1) Entity A launches entity B, then transfers control to entity B.

(2) Entity B launches entity C, then transfers control to entity C.

A question then arises: who launches entity A? Since no entity runs earlier than A, A is an entity that must simply be trusted. To build a trustworthy chain of trust, entities such as A should be implemented by certified manufacturers, and their trustworthiness is guaranteed by those manufacturers. Entities like A are called roots of trust (RoT) in chains of trust.

The RoTs of a platform should provide the minimal set of functions needed to describe the platform’s trust. Generally, a trusted computing platform has three kinds of RoTs: the RTM, the RTS and the RTR. In the following, we will first introduce the RTM, which is most closely related to building chains of trust, and then introduce the RTS and the RTR.

3.1.2 Root of Trust for Measurement

During the development of trusted computing technology, two kinds of RTM have appeared successively. The static RTM (SRTM) technology appeared first. The SRTM, the first entity to run on the hardware after power-on, is used to establish a chain of trust from the hardware to the OS and even to the applications; the chain of trust established by the static RTM is called the static chain of trust. A newer technology, the dynamic RTM (DRTM), can establish a chain of trust while the system is running. By invoking a special CPU instruction, the DRTM can establish a chain of trust whose TCB contains only a small amount of hardware and software; this kind of chain of trust is called the dynamic chain of trust. In the following, we introduce these two kinds of RTM.

3.1.2.1 Static Root of Trust for Measurement

The SRTM, which takes control of the platform first after the system powers on, is used to establish a chain of trust from the platform’s hardware to the upper applications, and it plays the role of trust anchor in that chain. Since the SRTM must run first on the platform, it is usually implemented as the first code to run in the BIOS, or as the whole BIOS, and is also called the core root of trust for measurement (CRTM). There are two kinds of CRTM in the current PC architecture:

(1) The CRTM is the first code to run in the BIOS. In this architecture, the BIOS consists of two independent blocks: the BIOS boot block and the POST BIOS. The BIOS boot block plays the role of the CRTM.

(2) The CRTM is the whole BIOS. In this architecture, the BIOS is indivisible, so the whole BIOS is the CRTM.

The CRTM runs first after the platform powers on and is responsible for measuring the code that runs next on the platform. If the CRTM is the BIOS boot block, it first measures all the firmware on the platform’s mainboard and then transfers control to the POST BIOS, which boots the following components. If the CRTM is the whole BIOS, it directly measures the following components, such as the bootloader, and then transfers control to the bootloader, which is responsible for establishing the rest of the chain of trust.

3.1.2.2 Dynamic Root of Trust for Measurement

The SRTM suffers from several drawbacks: for example, it cannot establish a chain of trust dynamically at runtime, and its trusted computing base (TCB) is too large. To deal with these drawbacks, the DRTM technology was proposed and has been adopted by the TPM 1.2 specifications. The DRTM relies on a special CPU instruction, which can be triggered at any time while the platform runs and then establishes an isolated execution environment whose TCB contains only a small amount of hardware and software. The DRTM has the advantage of flexible trigger timing, which is why it is called the dynamic RTM. The CPU giants Intel and AMD have proposed CPU architectures supporting DRTM: Trusted Execution Technology (TXT) and Secure Virtual Machine (SVM), respectively. In the following, we introduce SVM and TXT.

AMD DRTM Technology. The SVM architecture proposed by AMD consists of virtualization and security extensions, and the security extension is mainly used to establish an isolated execution environment based on the DRTM.

The SVM DRTM technology uses the SKINIT instruction as the dynamic root of trust for measurement, which can be triggered at any time while the CPU runs. When the SKINIT instruction is triggered, it runs the Secure Loader Block (SLB) code, which requires protection. Besides the flexibility of the trigger time, the AMD DRTM technology provides protection against DMA attacks for the SLB memory region: it provides an address register SL_DEV_BASE pointing to a contiguous 64K memory space, and peripherals are prohibited from issuing DMA accesses to this memory. However, the SVM DRTM technology does not clear the sensitive data in the SLB, so the SLB code should clear it itself before it exits.

Figure 3.1: SKINIT timeline.

The SKINIT instruction takes a physical address as its input operand, and it builds an isolated TEE when triggered, as described in Figure 3.1:

(1) Re-initialize the processors, then enter 32-bit protected mode and disable the paging mechanism.

(2) Clear bits 15–0 of EAX to 0 (EAX then holds the SLB base address), enable the SL_DEV protection mechanism to protect the SLB’s 64K-byte region of physical memory and prohibit any peripheral from accessing this memory using DMA.

(3) The processors perform an inter-processor handshake (all other processors are suspended).

(4) Read the SL image from memory and transmit it to the TPM, and ensure that the SL image cannot be corrupted by software.

(5) Signal the TPM to complete the hash and verify the signature. If any failure occurs, the TPM will conclude that illegal SL code was executed.

(6) Clear the Global Interrupt Flag (GIF) to disable all interrupts, including NMI, SMI and INIT, and ensure that the execution of subsequent code cannot be interrupted.

(7) Set the ESP register to the first address beyond the end of the SLB (SLB base + 10000H), so that data pushed onto the stack by the SL lie at the top of the SLB.

(8) Add the 16-bit entry point offset of the SL to the SLB base address to form the SL entry point address, jump to it and execute it.

Intel TXT Technology. In 2006, Intel proposed the TXT technology, which is similar to the SVM architecture. TXT consists of the VT-x technology and the Safer Mode Extensions (SMX), and SMX is mainly used to build the TEE. TXT involves the processors, chipset, I/O bus, TPM and other components.

The SMX of TXT provides an instruction set called GETSEC, which can be used to build a TEE. The SENTER leaf of the GETSEC instruction set is used as the dynamic root of trust for measurement and can be triggered at any time while the CPU runs. Just like AMD SVM, TXT provides DMA protection for the code running in the TEE (including the SINIT AC module and the MLE code), and currently TXT can isolate a 3M memory address space for sensitive code.

TXT also provides a mechanism called the launch control policy (LCP), which can be used to check the platform’s hardware configuration. The LCP mechanism is part of the SINIT AC module and is used to check whether the chipset and processor configuration meets the current security policy. The LCP consists of three parts:

(1) LCP policy engine: part of the SINIT ACM; it enforces the policies stored on the platform.

(2) LCP policies: stored in the TPM, they specify the policies that the SINIT ACM will enforce.

(3) LCP policy data objects: referenced by the policy structures in the TPM; each contains a list of valid policy elements, such as measurement values of MLEs or platform configurations.

Figure 3.2 describes the relationships between the LCP components. The LCP policy engine of the SINIT AC module reads the policy index stored in TPM NV memory, decides which policy file to use and checks whether the platform’s configuration and the MLE satisfy the LCP policy. If they do, the LCP policy engine transfers control to the MLE.

Before building an isolated TEE for the MLE, TXT first needs to load the SINIT AC module and the MLE into memory and then trigger the GETSEC instruction. GETSEC builds a secure environment for the MLE in the following steps (for more details, please refer to Figure 3.3):

Figure 3.2: LCP components.
Figure 3.3: The process of building TEE using Intel TXT technology.

(1) The GETSEC[SENTER] instruction can only run on the Initiating Logical Processor (ILP); the other processors are called Responding Logical Processors (RLPs). The ILP broadcasts GETSEC[SENTER] messages to the other processors in the platform. In response, the other logical processors disable their interrupts (by setting the interrupt mask), inform the ILP that they have disabled their interrupts, enter the SENTER sleep state and wait to join the TEE built by the ILP. From this point on, an RLP can only be woken up by the WAKEUP instruction of the GETSEC set.

(2) After the ILP receives the RLPs’ readiness signals, it loads, authenticates and executes the AC module. The AC module checks whether the configuration of the chipset and processors satisfies the security requirements, including the LCP check described above.

(3) The AC module measures the MLE, stores the measurement result in the TPM and leverages the DMA protection mechanism to protect the memory region where the MLE resides.

(4) After the execution of the AC module, the CPU launches the GETSEC[EXITAC] instruction to run the initial code of the MLE, which is the first code to run in the isolated environment. The initial code of the MLE builds an execution environment for the rest of the MLE code, and at this point it can trigger the WAKEUP instruction to invite the RLPs to join the MLE.

(5) When the MLE decides to destroy the secure execution environment it has built, the ILP can trigger the SEXIT instruction to exit TXT.

3.1.3 Root of Trust for Storage and Reporting

3.1.3.1 Root of Trust for Storage

TCG defines the RTS as a computing engine that maintains integrity measurement values and their sequence. The RTS stores the measurement values in a log and stores the hashes of the measurements in PCRs. In addition, the RTS is responsible for protecting all the data and cryptographic keys entrusted to the security chip. We introduce the RTS of security chips in this section.

In order to reduce cost, security chips are usually equipped with only a small amount of volatile memory. However, security chips need to protect a large number of cryptographic keys and entrusted secure data. To ensure the normal usage of security chips, a special storage architecture is designed for the RTS: a key cache management (KCM) module sits between the external storage device and the RTS in the security chip. The KCM transfers cryptographic keys between the security chip and the external storage device: it moves keys that are no longer required or not currently active out of the security chip, and moves keys that are about to be used into the security chip. This design not only reduces the memory required in the security chip but also guarantees the normal operation of the RTS.
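To make the key-swapping idea concrete, the sketch below models a KCM that keeps only a few key slots inside the chip and evicts the least recently used key to external storage when a new key must be loaded. It is a minimal illustration only: the class name, slot count and key identifiers are assumptions, and a real security chip would wrap evicted keys under a storage key before exporting them.

# Hypothetical sketch of the key cache management (KCM) idea: the security
# chip can hold only a few key slots, so inactive keys are swapped out to
# external storage and swapped back in on demand. Names and sizes are
# illustrative, not taken from any TPM/TCM specification.
from collections import OrderedDict

class KeyCacheManager:
    def __init__(self, chip_slots=3):
        self.chip_slots = chip_slots          # scarce volatile memory inside the chip
        self.inside_chip = OrderedDict()      # key_id -> key material, in LRU order
        self.external_storage = {}            # key_id -> (wrapped) key material

    def load_key(self, key_id, key_material=None):
        """Ensure key_id is inside the chip, evicting the least recently used key."""
        if key_id in self.inside_chip:
            self.inside_chip.move_to_end(key_id)                  # mark as recently used
        else:
            if key_material is None:
                key_material = self.external_storage.pop(key_id)  # swap the key back in
            if len(self.inside_chip) >= self.chip_slots:
                old_id, old_key = self.inside_chip.popitem(last=False)
                self.external_storage[old_id] = old_key           # swap out the LRU key
            self.inside_chip[key_id] = key_material
        return self.inside_chip[key_id]

kcm = KeyCacheManager(chip_slots=2)
kcm.load_key("srk-child-1", b"key-1")
kcm.load_key("srk-child-2", b"key-2")
kcm.load_key("srk-child-3", b"key-3")   # evicts "srk-child-1" to external storage
kcm.load_key("srk-child-1")             # swapped back into the chip on demand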

3.1.3.2 Root of Trust for Reporting

The RTR is a computing engine that provides the integrity attestation function. The RTR has two roles: first, it exposes the integrity measurement values stored in the security chip; second, it proves those integrity measurement values to remote platforms based on platform identity attestation. The integrity reporting function uses the attestation identity key (AIK) to sign the PCRs that store the platform’s integrity measurement values, and the remote verifier uses the signature to verify the platform’s state. A typical integrity measurement reporting protocol runs as follows: the verifier requests the attester’s platform configuration, supplying a random nonce that is used to resist replay attacks, and asks for the PCR values stored in the security chip; the security chip then signs the PCR values together with the nonce using the AIK; the attester transfers the signature to the verifier, who verifies it using the public key of the attester’s AIK to check the integrity state of the attester.
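As a simplified sketch of this nonce-based exchange, the snippet below models the AIK as an RSA key pair and the quote as a signature over the nonce and the selected PCR values. It is illustrative only: a real TPM uses the TPM_Quote command, an AIK certified by a Privacy CA and well-defined quote structures, none of which are reproduced here. It assumes the third-party "cryptography" package is available.

import os, hashlib
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

aik_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
aik_public = aik_private.public_key()

# Attester side: the "TPM" signs the selected PCR values together with the nonce.
pcr_values = {10: hashlib.sha1(b"measured software list").digest()}

def quote(nonce, pcrs):
    blob = nonce + b"".join(pcrs[i] for i in sorted(pcrs))
    return aik_private.sign(blob, padding.PKCS1v15(), hashes.SHA1())

# Verifier side: send a fresh nonce, then check the signature with the AIK public key.
nonce = os.urandom(20)                      # fresh randomness resists replay attacks
signature = quote(nonce, pcr_values)        # returned by the attester

blob = nonce + b"".join(pcr_values[i] for i in sorted(pcr_values))
aik_public.verify(signature, blob, padding.PKCS1v15(), hashes.SHA1())
print("quote signature valid; now compare PCR values against reference values")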

3.2 Chain of Trust

3.2.1 The Proposal of Chain of Trust

The Internet is the main means of communication, but it is also the main way for malicious code such as Trojan horses and viruses to spread. All malicious code originates from some terminal, so protecting the security of terminals has always been a focus of research in computer security. Currently, there are many technologies that protect computer systems, such as antivirus tools, intrusion detection systems and system access control. Although these technologies can improve the security of a system, all of them essentially patch the system and cannot improve its security thoroughly. Against this background, trusted computing proposes the chain of trust, which ensures that all the code running on the system is trustworthy by measuring the running components and transferring trust between them. The chain of trust can thus address the security problems of computer systems at their source.

Trusted computing technology gives us a way to build chains of trust by embedding the RTM in the computer system. We transfer trust from the RTM to the whole computer system by measuring and verifying the hardware/software layers of the system one by one, and thus guarantee that the whole system is trustworthy. Among the earlier works, the representative measurement systems are the Copilot system [8] proposed by the University of Maryland and the Pioneer system [9] proposed by CMU CyLab. These two systems perform measurement using a customized PCI card and an external trusted entity, respectively. Their disadvantage is that they cannot be deployed on general terminal platforms. Taking the TPM as the root of trust, TCG proposes a way of building a chain of trust for general terminal platforms by measuring the hardware, operating system and applications step by step.

Chains of trust are categorized into static chains of trust and dynamic chains of trust according to the kind of RTM they are based on. The static chain of trust takes the static RTM as its RoT and can ensure the trustworthiness of the whole platform. It was proposed early on and is now a mature technology whose procedure TCG specifies in its specifications. With the development of the technology, the dynamic chain of trust was proposed; it can be established at any time while the system is running by leveraging the flexibility of the dynamic RTM.

3.2.2 Categories of Chain of Trust

3.2.2.1 Static Chain of Trust

The static chain of trust starts from the SRTM and establishes a chain of trust from the platform’s hardware to the applications by measuring and verifying the hardware/software layers one by one. It transfers the trust from the RTM to the code of applications, and thus guarantees the trustworthiness of the whole platform. The static chain of trust is based on two technologies: integrity measurement and transitive trust. We first introduce these two technologies and then introduce the procedure of establishing the static chain of trust specified in TCG’s specifications.

Basic Principles of the Static Chain of Trust. TCG calls a measurement of an entity performed by a trusted entity a measurement event. A measurement event involves two classes of data: (1) data to be measured – a representation of the code or data being measured and (2) measurement digests – a hash of the data to be measured. The entity responsible for performing the measurement obtains the measurement digest by hashing the data to be measured; the measurement digest is a snapshot of the measured data and serves as its integrity mark.

The measurement digest captures the integrity information of the measured data and is also required for integrity reporting, so it needs to be protected, which is done by the RTS of the security chip. The data to be measured do not need to be protected by the security chip, but they must be remeasured during integrity attestation, so the computing platform needs to store them.

TCG uses the Stored Measurement Log (SML) to record the list of software involved in the static chain of trust; the SML mainly stores the data to be measured and the measurement digests. TCG does not define data encoding rules for SML contents but recommends following appropriate standards such as the Extensible Markup Language (XML).

The TPM uses a set of registers, called Platform Configuration Registers (PCRs), to store measurement digests, and provides an operation called Extend for updating them. A PCR update follows PCR[n] = SHA1(PCR[n] || data to be measured). The Extend operation produces a 160-bit hash value, which serves as the measurement digest of the measured software; later measured data are extended on top of the old PCR value, producing a new PCR value after each Extend operation. In this way, a PCR records an extended data list. For example, if PCR[i] is extended by a list of data m1, ..., mi, then finally PCR[i] = SHA1(...SHA1(SHA1(0 || m1) || m2)... || mi), and the digest in PCR[i] represents the execution sequence m1, ..., mi.
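As a minimal illustration of the Extend formula above, the following sketch simulates the folding in software and shows how a verifier can recompute the expected PCR value by replaying a measurement list. The component names are placeholders and no real TPM is involved.

import hashlib

def extend(pcr, measurement_digest):
    # PCR[n] = SHA1(PCR[n] || measurement digest), as in the formula above
    return hashlib.sha1(pcr + measurement_digest).digest()

pcr = b"\x00" * 20                         # a PCR starts as 20 zero bytes
for component in [b"bootloader", b"kernel", b"init"]:
    m = hashlib.sha1(component).digest()   # measurement digest m_i of the component
    pcr = extend(pcr, m)

# Replaying the same digests in the same order reproduces the final value;
# any change or reordering of the measured software yields a different PCR.
print(pcr.hex())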

Transitive Trust. Transitive trust follows the rule: first measure, then verify, finally execute (the measure-verify-execute method). Starting from the RTM, every running component first measures the next component and then checks its integrity against its measurement digest. If the integrity check passes, the running component transfers execution control to the measured component; otherwise, the chain of trust aborts, as this indicates that the measured component is not the target we expect. Through this process, trust is transferred from the RTM to the application layer. Figure 3.4 describes how trust is transferred from the static RTM to the upper application layer.
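The measure-verify-execute rule can be summarized in a few lines. The sketch below is illustrative only: reference_digests and launch() are hypothetical stand-ins for the expected measurement values and the actual transfer of control.

import hashlib

reference_digests = {"bootloader.bin": hashlib.sha1(b"bootloader image").hexdigest()}

def launch(name):
    print(f"transferring control to {name}")

def measure_verify_execute(name, image):
    digest = hashlib.sha1(image).hexdigest()       # measure the next component
    if digest != reference_digests.get(name):       # verify against the expected digest
        raise SystemExit(f"integrity check failed for {name}: chain of trust aborted")
    launch(name)                                     # execute only after verification

measure_verify_execute("bootloader.bin", b"bootloader image")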

Establishment of the Static Chain of Trust. A conventional computer system consists of hardware, a bootloader, an operating system and applications, and boots as follows: when the system is powered on, the BIOS first runs the Power-On Self Test (POST) procedure and then invokes the INT 19H interrupt to launch the subsequent program (normally the bootloader stored in the MBR of the hard disk) according to the boot sequence set in the BIOS; the bootloader then launches the operating system and finally the applications.

Following the idea of transitive trust, TCG defines the procedure for establishing the chain of trust from the RTM to the bootloader:

(1) After the system is powered on, the CRTM extends itself, the POST BIOS (if it exists) and the motherboard firmware into PCR[0].

(2) After the BIOS gets control, it extends the PCRs in the following way: first, extend the configuration of the platform motherboard and hardware components into PCR[1]; second, extend the option ROMs controlled by the BIOS into PCR[2]; third, extend the configuration of the option ROMs and related data into PCR[3]; fourth, extend the IPL, which reads the MBR code and finds the loadable image in the MBR, into PCR[4]; fifth, extend the configuration of the IPL and other data used by the IPL into PCR[5]. The details of PCR usage are listed in Table 3.1.

(3) Invoke INT 19H to transfer execution control to the MBR code (usually the bootloader). At this point, the chain of trust has been extended to the bootloader.

Table 3.1: PCR usage.

PCR index   PCR usage
0           Store measurement values of CRTM, BIOS and platform extensions
1           Store measurement values of platform configuration
2           Store measurement values of option ROM code
3           Store measurement values of option ROM configuration and data
4           Store measurement values of IPL code (usually the MBR)
5           Store measurement values of IPL code configuration and data (used by the IPL code)
6           Store measurement values of state transition and wake event information
7           Reserved for future usage
Figure 3.4: The static chain of trust.

After the chain of trust is established to the bootloader, the bootloader and the OS must obey the measure-verify-execute method if they want to extend the chain of trust. The establishment of the whole static chain of trust is described in Figure 3.4.

3.2.2.2 Dynamic Chain of Trust

The static chain of trust introduced earlier takes the SRTM as its RoT and can only be established when the system is powered on. This inflexibility is inconvenient for users. To deal with this problem, AMD and Intel proposed CPU security extensions supporting special instructions that can serve as a DRTM. Combined with TPM version 1.2, these instructions can establish a dynamic chain of trust based on the DRTM. First, this kind of chain of trust is based on the CPU’s special security instructions and can be established at any time; second, the dynamic chain of trust greatly reduces the TCB of the platform, because the chain of trust no longer relies on the whole platform system.

Because of its flexibility, the application scenarios of the dynamic chain of trust are not confined. The current dynamic chain of trust can not only provide functions like those of the static chain of trust, such as trusted boot and TEE building for general computing platforms and virtual machine platforms, but can also be used to build a chain of trust for any code running on the system.

Although dynamic chains of trust can be applied in many kinds of scenarios, the establishment procedure is similar in all of them. Except for some slight differences in technical details, the DRTM technologies provided by AMD and Intel follow the same principle. In the following, taking a hypervisor-based virtual machine platform as an example, we describe how to build a dynamic chain of trust for a piece of code (called the SL in the SVM architecture and the MLE in the TXT technology). The details are depicted in Figure 3.5.

(1) Load the code of the hypervisor and the check-code for the platform (such as the AC module in the TXT technology).

(2) Invoke the security instruction, which does the following work:

(a) Initialize all processors in the platform.

(b) Disable interrupts.

(c) Enable DMA protection for the hypervisor code.

(d) Reset PCRs 17 to 20.

Figure 3.5: The establishment process of dynamic chain of trust.

(3) The main processor loads the check-code for the platform, guarantees its legality by checking its signature and then extends this code into PCR 17.

(4) Run the check-code for the platform to ensure that the platform’s hardware satisfies the security requirements, then measure the hypervisor and extend it into PCR 18 (a sketch of the resulting PCR values follows this list).

(5) Run the hypervisor in the isolated TEE and wake the other processors to join this isolated environment as required.
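The sketch below shows how a verifier could recompute the expected PCR 17/18 values produced by steps (3) and (4). It assumes the TPM 1.2 convention that dynamic PCRs hold all-0xFF bytes at power-on and are reset to all-zero bytes by the dynamic launch (this specific byte convention is an assumption on top of the note below); the digest inputs are placeholders.

import hashlib

def extend(pcr, data):
    return hashlib.sha1(pcr + hashlib.sha1(data).digest()).digest()

POWER_ON_VALUE = b"\xff" * 20      # assumed value of dynamic PCRs before any dynamic launch
DYNAMIC_RESET = b"\x00" * 20       # assumed value set by the security instruction

pcr17 = extend(DYNAMIC_RESET, b"platform check-code (AC module)")   # step (3)
pcr18 = extend(DYNAMIC_RESET, b"hypervisor image")                  # step (4)
print("PCR17:", pcr17.hex())
print("PCR18:", pcr18.hex())
# A report showing values derived from DYNAMIC_RESET (rather than from
# POWER_ON_VALUE) is what convinces the verifier that the isolated
# environment was entered via the DRTM.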

Please note that when the security instruction is invoked, PCRs 17 to 20 are reset, and the reset value is different from the initial value set when the system is powered on. This difference lets the remote verifier conclude that the system has indeed entered the isolated TEE established by the DRTM. To guarantee the correctness of remote attestation, system designers must ensure that only the security instruction can reset the dynamic PCRs. To achieve this goal, the TPM version 1.2 specifications introduce the locality mechanism, which applies access control when platform components access TPM resources (such as PCRs). Six localities are defined (Localities 0–4 and Locality None). The following text describes each locality and its associated components:

Locality 4: Trusted hardware components. It is used by the DRTM to establish the dynamic chain of trust.

Locality 3: Auxiliary components. This is an optional locality. If used, it is defined upon implementation.

Locality 2: Runtime environment of OS launched dynamically (dynamic OS).

Locality 1: Environment used by the dynamic OS.

Locality 0: The static RTM, its chain of trust and its environment.

Locality None: This locality is defined for TPM version 1.1 and is used for backward compatibility.

To support the DRTM, TPM v1.2 adds 8 PCRs compared to TPM v1.1, namely PCRs 16 to 23. These PCRs are also called dynamic PCRs, and PCRs 0 to 15 defined in TPM v1.1 are called static PCRs. TPM v1.2 also adds the PCR reset (pcrReset), locality reset (pcrResetLocal) and locality extend (pcrExtendLocal) attributes for the DRTM technology, and the locality mechanism supports usage control on these dynamic PCRs. Table 3.2 lists the PCR attributes and the usage control each locality has on the PCRs: a 0 in the table indicates that the PCR does not have the attribute and a 1 indicates that it does. The 0/1 values in the locality reset and locality extend columns indicate whether components at localities 0–4 have the corresponding right.

The locality level represents the physical binding relationship between the platform’s components and the TPM. For more details, please refer to Table 3.3. For example, locality 4 is bound to the DRTM, so only the DRTM possesses control of the corresponding TPM resources, particularly the corresponding PCRs.

Table 3.2: PCR attributes.

Table 3.3: Locality usage.

3.2.3 Comparisons between Chains of Trust

The chain of trust aims to build an isolated TEE starting from the root of trust. Table 3.4 gives a detailed comparison of the static chain of trust and the dynamic chain of trust in terms of hardware requirements, time of establishment, TCB, hardware protection, development difficulty, user experience and known attacks.

Table 3.4 shows that the static chain of trust has lower hardware requirements, so we can establish a chain of trust for the whole system by adding functionality that is transparent to users and has little effect on the user experience. However, because of the CRTM, the static chain of trust can only be established at system start, so we need to reboot the platform if we want to re-establish the chain of trust. In addition, the static chain of trust can only start from the hardware level of the platform and extend to the application level layer by layer, so its TCB contains the whole system. The bigger the TCB, the more opportunities for security problems and the lower the security level of the system. Kauer [98] surveys attacks on the static chain of trust, including the TPM reset attack, the BIOS replacement attack and attacks exploiting bugs in the code. The TPM reset attack finds a way to reset a TPM v1.1 without resetting the platform and leverages it to break the static chain of trust. The current BIOS is stored in the motherboard’s flash, which can be replaced; since the CRTM stored in the BIOS is the foundation of the static chain of trust, replacing it makes the static chain of trust untrustworthy. Finally, the TCB of the static chain of trust is large, and Kauer [98] found a bug in the bootloader that can be used to attack the static chain of trust.

Table 3.4: Comparisons between static chain of trust and dynamic chain of trust.

                         Static chain of trust                                      Dynamic chain of trust
Hardware requirements    General PC architecture equipped with a security chip      Security chip, and the CPU must support security instructions
Time of establishment    Only at system start                                       At any time
TCB                      The whole computer system                                  A small amount of hardware and software
Hardware protection      No                                                         DMA protection for the isolated environment
Development difficulty   Easy; does not require special programs                    Difficult; programs must be self-contained
User experience          Little effect on user experience                           Poor; can only run isolated code
Known attacks            TPM reset attack, BIOS replacement attack, TCB bug attack  None

Compared to the static chain of trust, the dynamic chain of trust has advantages in establishment time, TCB size and protection of the isolated environment. However, this newer technology is not yet mature in usability and user experience. First, it is difficult to use, because all the code must be self-contained, as the execution environment built by the DRTM is totally isolated from the system; code that depends on other libraries needs to build its own running environment, which increases the development effort. Second, the DRTM may give users a poor experience, because the isolated environment it provides only runs the sensitive code requiring protection, while all other programs are suspended.

3.3 Systems Based on Static Chain of Trust

The establishment of the static chain of trust consists of the BIOS, bootloader and operating system phases. All phases follow the same principle: after getting execution control, measure the next code to run and extend it into the corresponding PCR of the security chip. The BIOS phase is based on the CRTM stored on the computer’s motherboard, and the TCG PC specification gives a detailed description of the establishment of the chain of trust in this phase. Computer vendors adopt this specification to build the chain of trust for their products.

The chain of trust at the bootloader layer aims to check the security of the bootloader on top of the trusted BIOS and to build a trusted execution environment for the OS. There is a great deal of research on the chain of trust at this layer, such as the open-source project Trusted GRUB [99] and IBM’s Tpod system [100]. Following the procedure for establishing a chain of trust defined by TCG’s specifications, these trusted boot systems establish chains of trust on top of the open-source bootloader GRUB using the principle of measuring first and then executing. Besides the basic functionality of chains of trust, Trusted GRUB provides GRUB commands extended with trusted computing functions for users’ convenience. The Institute of Software, Chinese Academy of Sciences (ISCAS) proposes a trusted bootloader subsystem supporting OS recovery. The subsystem checks the measurement values of the OS kernel and important configuration files before launching the OS. If the check shows that they do not meet the integrity requirement, the system enters a recovery subsystem, which can repair the OS by restoring the tampered files.

The chain of trust at the operating system layer is more complex. Modern operating systems provide many kinds of services and applications, which run in no fixed order. This poses a great challenge for extending the chain of trust to the OS layer and the application layer. To confront this challenge, the IBM T. J. Watson Research Center proposed the IMA system [10] and Microsoft Research proposed the Palladium/NGSCB system [101]. IMA and Palladium/NGSCB adopt the principle that every executable component loaded into the system is measured before execution. This follows the idea behind the establishment of chains of trust: performing measurement before execution guarantees the integrity status of a program at the time it is loaded into the system. The IMA researchers then proposed a successor scheme that combines measurement with mandatory access control and developed a prototype called PRIMA [11]. PRIMA measures the components related to the mandatory access control model and uses mandatory access control to ensure the integrity of the information flows of critical components. This design reduces the measurement overhead and guarantees the integrity of the system at runtime. We have designed a chain of trust that combines loading-time measurement and runtime measurement and supports two measurement methods: component measurement and dynamic measurement. Our chain of trust has the following advantages: first, we refine the measurement granularity to the level of system components; second, we can measure running programs in the system in real time, which guarantees the integrity of the system at runtime.

We will introduce the principles and technologies used to establish chains of trust to the bootloader and the OS, respectively, and take our chain of trust as an example to elaborate the system and method for establishing a static chain of trust from the hardware to the operating system, to the applications and finally to network access.

3.3.1 Chain of Trust at Bootloader

The chain of trust at the bootloader is a chain of trust that takes the TPM/TCM as its root of trust. It starts from the hardware layer of the system and extends the chain of trust to the BIOS and the bootloader, which builds a trusted execution environment for the operating system. It is the first phase of, and the foundation for, establishing a whole chain of trust for the computer system.

As the hardware and BIOS have strong tamper resistance, most research on trusted boot focuses on the bootloader layer, which is much more vulnerable. Establishing the chain of trust at the bootloader layer requires checking the security of the bootloader’s code, configuration and related files, and the most typical trusted boot system is Trusted GRUB. To ensure the security of the execution environment before the OS starts, Trusted GRUB uses the TPM measurement mechanism to check every stage of GRUB, the GRUB configuration and the OS kernel image, and extends the measurement values of this software into PCRs. The main steps Trusted GRUB performs to build a chain of trust are as follows:

(1) When the system reaches the BIOS phase, the BIOS measures the MBR on the hard disk, that is, the code of GRUB Stage1, stores the measurement values in PCR 4 of the TPM and then loads and runs Stage1.

(2) GRUB Stage1 loads and measures the first sector of GRUB Stage1.5, extends the measurement values into PCR 4 and then executes Stage1.5.

(3) After getting execution control, GRUB Stage1.5 loads and measures GRUB Stage2, extends GRUB Stage2 into PCR 4 and then transfers execution control to GRUB Stage2. At this point, the bootloader has finished starting up and can load the OS into memory.

(4) GRUB measures its configuration file grub.conf, extends the measurement values into PCR 5, measures the OS kernel and checks its integrity (see the sketch after this list).
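The following sketch replays the four steps above with the same SHA-1 extend semantics used earlier: the GRUB stages are folded into PCR 4, grub.conf into PCR 5, and the kernel image is compared with a reference digest before control is handed over. All file contents and the reference value are fabricated for illustration; they are not taken from Trusted GRUB itself.

import hashlib

def extend(pcr, data):
    return hashlib.sha1(pcr + hashlib.sha1(data).digest()).digest()

pcr4 = pcr5 = b"\x00" * 20
for stage in (b"stage1 code", b"stage1.5 code", b"stage2 code"):
    pcr4 = extend(pcr4, stage)                 # steps (1)-(3): GRUB stages into PCR 4

grub_conf = b"default=0\ntimeout=5\n"
pcr5 = extend(pcr5, grub_conf)                 # step (4): grub.conf into PCR 5

kernel_image = b"vmlinuz bytes"
expected_kernel = hashlib.sha1(b"vmlinuz bytes").hexdigest()   # reference value
if hashlib.sha1(kernel_image).hexdigest() != expected_kernel:
    raise SystemExit("kernel integrity check failed: refuse to boot / enter recovery")
print("PCR4:", pcr4.hex(), "PCR5:", pcr5.hex(), "- kernel verified, booting OS")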

Trusted boot ensures the security of the OS loader and can prevent attackers from injecting malicious code before the OS runs, which lays the security foundation for OS boot. All trusted boot systems use a similar method to build the chain of trust, and their main goal is to guarantee the integrity of the bootloader, its configuration files and the OS kernel image.

3.3.2 Chain of Trust in OS

Chains of trust at the OS layer aim to provide a trusted execution environment for applications and platform integrity attestation services for remote verifiers. After the bootloader transfers execution control to the OS kernel, the chain of trust at this layer must protect all kinds of executable programs, such as kernel modules, OS services and applications, which might affect the integrity of the whole system. There are many ways to establish a chain of trust for the OS. From the viewpoint of measurement time, some systems perform measurement when software is loaded into the system to guarantee loading-time integrity, while others perform measurement while software is running to guarantee runtime integrity. From the viewpoint of measurement granularity, some systems measure the whole OS kernel image, while others measure the components running in the OS one by one, such as kernel modules, important configuration files and critical OS data structures. From the viewpoint of security policies, some systems perform measurement according to the mandatory access control policy deployed in the system to ensure that the mandatory access control policies are correctly enforced. The IMA and NGSCB systems start from the viewpoint of measurement time and strictly follow the principle of measuring executable software when it is loaded, which ensures the integrity of the software before it gets execution control. However, this kind of chain of trust has potential security risks, especially as modern operating systems become more and more complex, and it cannot guarantee a program’s runtime integrity. The PRIMA system starts from the viewpoint of mandatory access control policy and proposes a method for guaranteeing the runtime integrity of information flows. We design a chain of trust from the viewpoints of measurement granularity and measurement time. By proposing component measurement and dynamic measurement, our system not only refines the granularity of measured components but also guarantees the runtime integrity of the system. In the following, we first introduce methods for establishing chains of trust based on loading-time measurement and on information flow control, respectively, and then introduce our chain of trust.

3.3.2.1 Establishing Chain of Trust by Loading-Time Measurement

The chain of trust at the OS layer aims to extend the chain of trust from the trusted bootloader to the OS and then to the applications. To ensure the security of the chain of trust, the measurement agent is usually implemented in the OS kernel. As the trusted bootloader has already checked the integrity of the OS kernel, the measurement agent in the OS kernel is assumed to be trustworthy. After the bootloader decompresses the kernel image, the OS measurement agent, following the execution flow of the OS, invokes the TPM to measure OS modules and kernel services as they are loaded into the kernel, and in this way the OS chain of trust subsystem is established. After the OS starts, to ensure the security of running applications, every loaded program is measured by the OS measurement agent before it executes. This kind of chain of trust can be combined with a black/white-list mechanism for applications to enhance the runtime security of the system.

The issues that must be considered when designing and implementing the chain of trust at the OS layer are the measurement content, the measurement timing and the storage of the measurement data. As any program loaded into the OS kernel might contain security vulnerabilities that attackers can exploit to break the integrity of the system, the chain of trust measures all executable programs loaded into the kernel, including applications, dynamic link libraries, static link libraries and even executable shell scripts. When they are loaded, we leverage hook functions to obtain their content and then perform the measurement, and finally store the measurement values in the OS kernel and protect them using the security chip. In this section, taking Linux as an example, we describe the principle and method of establishing a chain of trust based on loading-time measurement.

Architecture. Figure 3.6 depicts the architecture of the chain of trust at the OS layer, whose core modules are the measurement agent, the attestation service and the measurement lists storing measurement values. All of these modules reside in the OS kernel. The measurement agent runs first after the OS kernel is decompressed, measures all executable programs loaded into the OS, extends the measurement values into the corresponding PCRs and stores the measurement log in the kernel’s measurement lists. The measurement lists store the measurement values produced while the OS runs; these values are in effect logs recording the sequence of software extended into the PCRs and play an important role in platform integrity verification. The attestation service is used to prove the security of the OS chain of trust subsystem and usually adopts the binary attestation scheme defined by TCG.

Implementation. The core module of the chain of trust at the OS layer is the measurement agent. For Linux, the measurement agent is a Linux Security Module (LSM); for Windows, it is a virtual kernel driver module with a relatively high launch privilege. The measurement agent needs to measure OS-related programs and files as well as application-layer executable programs when they are loaded into the kernel; the measured content includes kernel modules, executable programs in user space, dynamic link libraries and executable scripts.

Figure 3.6: Architecture of chain of trust at operating system layer.

(1) Kernel modules: These are kernel extensions that can be loaded into the kernel dynamically. There are two ways of loading a kernel module: explicit loading and implicit loading. Explicit loading requires the user to load Linux kernel modules actively using the insmod or modprobe commands; implicit loading has the kernel itself automatically invoke modprobe in the context of a user process to load the required kernel module. In Linux 2.6, both ways invoke the load_module call to inform the kernel that the kernel module has been loaded successfully.

(2) Executable programs in user space: All executable programs in user space are executed through the execve system call, which parses the binary code of the executable program and then invokes the file_mmap hook function to map the code into memory. The measurement agent measures the executable program during this mapping. Finally, the OS creates the context for the user process and jumps to the main function to execute it.

(3) Dynamic link libraries: Dynamic link libraries are shared code libraries required by system programs and user applications. When executable programs run, the OS loads the necessary dynamic link libraries. Dynamic link libraries are also loaded into the OS through the file_mmap hook function, so their measurement method is the same as that for applications in user space.

(4) Executable scripts: Running an executable script depends on a script interpreter (such as bash), which is itself measured as an executable program when the OS loads it. Executable scripts can have a great effect on the integrity of the system, so measuring them is an important part of establishing the chain of trust at the OS layer.

Usually the integrity measurement work of the measurement agent is performed after the programs requiring measurement are loaded. For user applications and dynamic link libraries, the measurement agent performs measurement when they are mapped into memory (namely, in the file_mmap hook function); for kernel modules, the measurement agent performs measurement in the load_module procedure after they have been loaded into the kernel successfully; the script interpreter (such as bash) is itself measured as an executable program, and measuring executable scripts requires modifying the interpreter so that it can invoke the security chip to perform integrity measurement when scripts are loaded. To prevent frequent measurements from hurting the system’s performance, the chain of trust adopts a measurement cache mechanism, which records all measured files the first time they are measured. When a new measurement request occurs, the measurement agent measures only programs that have never been measured or programs that have been modified. The measurement cache mechanism greatly improves the performance of the chain of trust system without reducing the security level of the system.
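As a rough user-space model of this behavior (the real agent lives in the kernel and hooks file_mmap and load_module), the sketch below re-hashes a file only when it has never been measured or appears modified. Modification is approximated here by the file's mtime rather than a kernel-maintained dirty flag, and the measured path and PCR index are only examples.

import hashlib, os

measurement_log = []          # ordered log of (path, digest), modeling the in-kernel list
cache = {}                    # path -> (mtime, digest), the measurement cache

def measure_on_load(path, pcr_extend):
    mtime = os.stat(path).st_mtime
    cached = cache.get(path)
    if cached and cached[0] == mtime:
        return cached[1]                              # cache hit: skip re-measuring
    with open(path, "rb") as f:
        digest = hashlib.sha1(f.read()).hexdigest()   # measure the loaded file
    cache[path] = (mtime, digest)
    measurement_log.append((path, digest))
    pcr_extend(digest)                                # extend into the chosen PCR
    return digest

if __name__ == "__main__":
    measure_on_load("/bin/ls", lambda d: print("extend PCR 10 with", d))
    measure_on_load("/bin/ls", lambda d: print("extend PCR 10 with", d))  # served from cache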

The above chain of trust follows TCG’s static measurement idea and establishes a chain of trust from the trusted bootloader to the OS and on to applications, which provides a trusted execution environment for applications. This kind of chain of trust leverages OS hook functions to measure kernel modules, dynamic link libraries, static link libraries, applications and executable scripts when they are loaded; in this way it measures almost all executable programs that can affect the execution environment of the OS, so it can, to some extent, detect loading-time attacks on all programs. Another feature of this kind of chain of trust is that it requires very little modification to the OS, especially for Linux: only small changes to the Linux source code are needed to achieve a relatively high security level.

As it has no runtime protection mechanism, the chain of trust above cannot guarantee the security of the execution environment while the system is running and cannot prevent runtime attacks, which makes the system vulnerable in practice. Besides, this kind of chain of trust measures all programs loaded into the system, including programs that have no effect on the system’s security, so it imposes some unnecessary overhead on the system.

3.3.2.2 Chains of Trust Based on Information Flows

Loading-time OS measurement has two issues: runtime integrity is not addressed, and the measured content is too large. To solve these two issues, research institutions have proposed establishing chains of trust based on information flows [11]. This kind of chain of trust is based on mandatory access control models; it measures the integrity of information flows to ensure that the system meets the preset mandatory access control policy. Another good feature of this kind of chain of trust is that it only measures the components that affect the system’s information flows, overcoming the disadvantage of loading-time measurement systems.

A chain of trust based on information flows requires that the OS kernel support mandatory access control (such as SELinux). Its main idea is to leverage the mandatory access control model to ensure the integrity of the OS’s information flows and to leverage the measurement function to ensure that the mandatory access control model is correctly enforced. The mandatory access control model labels the subjects and objects that affect the information flows of the OS and defines policies that restrict the information flows between subjects and objects. In this way, the mandatory access control model ensures that information can flow between subjects and objects only as the policies define. However, conventional mandatory access control models prohibit all information flows from low-integrity entities to high-integrity entities, which makes them impractical for real systems. Chains of trust therefore adopt mandatory access control models like CW-Lite [102], which allow data to be received from low-integrity entities by adding interfaces that filter low-integrity data; the interfaces only allow data that do not affect the execution environment of the system to pass through. The measurement agent measures two kinds of data: (1) the policy files of the mandatory access control model; and (2) the subjects and objects defined in those policy files. For each executable process loaded into the OS, the measurement agent makes a decision: if it belongs to the subjects and objects defined in the policy files, measure it; otherwise, ignore it.
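A rough sketch of this decision logic follows: the MAC policy itself is measured first, and a load is measured only if its subject label or object path is covered by the policy. The policy structure and labels are invented for illustration and are far simpler than a real SELinux or CW-Lite policy.

import hashlib

mac_policy = {
    "trusted_subjects": {"sshd_t", "init_t"},
    "trusted_objects": {"/usr/sbin/sshd", "/sbin/init"},
}
policy_digest = hashlib.sha1(repr(sorted(mac_policy.items())).encode()).hexdigest()
measurements = [("mac-policy", policy_digest)]        # (1) measure the policy itself

def on_load(label, path, content):
    """(2) Measure only entities covered by the policy; ignore everything else."""
    if label in mac_policy["trusted_subjects"] or path in mac_policy["trusted_objects"]:
        measurements.append((path, hashlib.sha1(content).hexdigest()))
    # loads outside the policy do not influence the trusted information flows

on_load("sshd_t", "/usr/sbin/sshd", b"sshd binary")   # measured
on_load("user_t", "/usr/bin/game", b"game binary")    # ignored
print(measurements)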

The integrity verification of this kind of chain of trust follows the idea of checking whether the mandatory access control model is correctly enforced. First, check whether the subjects, objects and policy files satisfy the integrity requirement; then check whether the information flows between subjects and objects meet the rules defined in the policy files. If all checks pass, it can be concluded that the runtime information flows meet the integrity requirements.

Chains of trust based on information flow control implement integrity measurement and control of the system’s information flows at runtime. They measure the subjects and objects that affect the system’s integrity according to the mandatory access control policy and restrict the access behaviors of subjects and objects based on the measurement results, while ignoring subjects that do not affect the system’s integrity. In short, by combining the mandatory access control mechanism with the chain of trust, we can greatly reduce the measurement content and improve the performance of establishing the chain of trust.

3.3.3 The ISCAS Chain of Trust

After researching the methods used to build static chains of trust, we designed and implemented the ISCAS chain of trust system, which includes trusted boot, OS measurement, application measurement and trusted network connection (TNC). The ISCAS chain of trust system consists of the trusted boot subsystem, the OS chain of trust subsystem and the TNC subsystem, and realizes trust establishment for the platform initialization environment, the trusted execution environment and the network access environment. Based on the trusted BIOS, the trusted boot subsystem establishes a chain of trust covering the BIOS and the bootloader by measuring and checking the bootloader and its configuration files. Based on the trusted boot subsystem, the OS chain of trust subsystem extends trust to the OS layer and the application layer by leveraging the dynamic measurement method and the component measurement method. These two methods not only implement the chain of trust to the OS at loading time but also implement the dynamic establishment of chains of trust for applications and the programs they depend on.

Figure 3.7 depicts the overall architecture of the ISCAS chain of trust system, which consists of the trusted boot subsystem, the OS chain of trust subsystem and the TNC subsystem. The trusted boot subsystem establishes a chain of trust following TCG’s specifications: first, it measures the configuration files of the bootloader and the OS kernel files; second, it extends the measurement results into the security chip; third, it transfers execution control to the OS kernel. The OS chain of trust subsystem establishes a chain of trust to the OS layer and the application layer by leveraging component measurement and dynamic measurement. Component measurement refines the granularity of measured data to OS components and uses hook functions to measure them when they are loaded into the OS. Dynamic measurement can measure programs in real time according to users’ measurement requests, which ensures the security status of the system at runtime. These two measurement methods together establish the chain of trust at the OS layer. Based on the ISCAS chain of trust system, the platform can provide attestation services, which prove the integrity status of the platform to remote verifiers, and can also provide the system’s integrity information to network services such as the TNC, which extends the trust of the system into the network. In the following sections, we introduce the technical implementation of the trusted boot subsystem, the OS chain of trust subsystem and the TNC subsystem.

Figure 3.7: The architecture of ISCAS chain of trust system.

3.3.3.1 Trusted Boot Subsystem

The trusted boot subsystem starts at computer power-on and establishes a chain of trust from the trusted BIOS to the bootloader following the current boot sequence. This subsystem follows TCG’s specifications on the procedure for establishing a chain of trust and on the usage of security chips, and builds a trustworthy execution environment for the OS. The detailed boot sequence of the trusted boot subsystem is depicted in Figure 3.8. The subsystem also provides a demonstration function, which can show the trusted boot sequence and serve as an experimental platform.

Figure 3.8: The boot sequence of the trusted boot subsystem.

Besides the basic function of establishing a chain of trust, the trusted boot subsystem provides the following functions: report and verification of the chain of trust, integrity report and verification of files, configuration of trusted boot, kernel repair, and control of secure boot.

(1) Report and verification of the chain of trust: report the measurement results of trusted boot and verify the trusted boot.

(2) Report and verification of files: before the start-up of the OS, display the measurement and verification results of important files involved in the establishment of the chain of trust.

(3) Configuration of trusted boot: configure the parameters of trusted boot, for example, the system files to be measured during the boot phase, and the choice between trusted boot and secure boot.

(4) Kernel repair: if the measurement results of the OS kernel and configuration files do not meet the integrity requirement, guarantee that the system enters the repair system, which repairs the OS kernel files that fail the integrity check.

(5) Control of secure boot: provide GRUB commands with trusted computing functions, so that users can control the boot sequence of the system.

The trusted boot subsystem is compatible with the open-source project GRUB. Its main feature is that, by introducing the security chip, it ensures that the booted system is trustworthy. First, before the start-up of the OS, every boot stage and all important boot files can be reported and verified in the trusted boot subsystem. Second, the OS kernel and important system files must be measured and verified before being loaded, which guarantees the trustworthiness of the OS start-up. The trusted boot subsystem can not only ensure the normal boot of the OS but also repair the OS using the kernel repair function when important OS files are abnormal, for example, when the kernel is corrupted.

3.3.3.2 The OS Chain of Trust Subsystem

The OS chain of trust subsystem builds on the trusted boot subsystem, extends the chain of trust from the trusted bootloader to applications by leveraging OS measurement technology and provides a trusted execution environment for user applications. It also provides attestation services through which verifiers can obtain integrity verification evidence. On the measurement side, the OS chain of trust subsystem ensures the system’s integrity status when the OS is loaded into memory by measuring all executable programs loaded into the OS, such as kernel modules, dynamic link libraries and user applications. Moreover, it also measures the processes and kernel modules in the OS to ensure the integrity status of the system at runtime. The attestation service not only provides the platform’s integrity data to remote verifiers but also supports the TNC in building a trusted network environment.

By combining with the trusted boot subsystem, the OS chain of trust subsystem supports the establishment of a complete chain of trust on trusted computing platforms running either Windows or Linux. As Windows is not open source, the chain of trust only guarantees the integrity of the whole kernel image and the processes running on the OS. For Linux, the OS chain of trust subsystem provides measurement with finer granularity: Besides measuring the whole OS image, it measures critical data structures and components. The measurement technology that combines component measurement and dynamic measurement can resist loading-time attacks on programs and can measure running programs of the system in real time, which overcomes the issues of TCG's static measurement.

The OS chain of trust subsystem builds trust for the platform by leveraging OS measurement technology, which enhances the security of the system and can be used to implement a trustworthy OS. This kind of chain of trust also provides a system attestation service that proves the integrity status of the system to remote platforms and can be used to establish trusted channels based on the platform's status. It can also provide a trusted network authentication service based on user and platform identities, which guarantees that all the endpoints connected to the network satisfy the specified security policy; in this way, trust is extended to the whole network.

Component Measurement. A system usually consists of interrelated program code, and we define an executable program of the system together with its directly dependent code as a component. Component measurement measures the programs and their directly dependent code. When components are loaded into the OS, the measurement agent measures the executable program and its directly dependent code (such as static and dynamic link libraries). We implement component measurement by adding measurement functions on the path where the executable program and its dependent code are loaded; these functions measure the integrity of the executable program and its directly dependent code. This kind of measurement ensures the integrity of application-layer components when they are loaded and also covers their dependent program code, so it has a wider measurement scope and finer measurement granularity than conventional loading-time measurement.

The ISCAS chain of trust system implements the integrity measurement of OS components and application components. When these components are loaded, they trigger the measurement functions embedded in the system hook functions, which perform integrity measurement on them. The steps of component measurement are as follows (a sketch of the log handling in steps (2) and (3) follows the list):

(1)When an executable program is loaded into the OS or the measurement agent receives a user's measurement request, the measurement agent finds all of the directly dependent code of this program, such as static and dynamic link libraries and kernel modules; together these programs make up a component.

(2)The measurement agent checks whether the programs contained in the component exist in the measurement log one by one. If a program does not exist in the log, it is being loaded into the OS for the first time, and the measurement agent computes the integrity measurement value of the program and adds it to the measurement log. For programs that already exist in the log, the measurement agent checks whether the dirty bit is set. If the dirty bit is not set, the agent returns directly.

(3)If the dirty bit is set in step (2), the program must have been modified in memory and should be re-measured. The measurement agent re-measures the program and adds the new result to the measurement log.
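The sketch below illustrates the log handling in steps (2) and (3). It is a minimal user-space illustration rather than the ISCAS implementation: the in-memory log structure, the dirty flag and the file-hashing helper are assumptions made for the example, and OpenSSL's SHA-1 again stands in for the hash that would normally be extended into the security chip.

/* Minimal sketch of the measurement-log check in component measurement.
 * Assumptions: a fixed-size in-memory log and OpenSSL SHA-1 instead of the
 * TPM/TCM; the real agent runs in the kernel and extends a PCR.
 * Compile with: gcc component_measure.c -lcrypto
 */
#include <stdio.h>
#include <string.h>
#include <openssl/sha.h>

#define MAX_ENTRIES 256

struct log_entry {
    char path[256];
    unsigned char digest[SHA_DIGEST_LENGTH];
    int dirty;                       /* set when the in-memory image changed */
};

static struct log_entry mlog[MAX_ENTRIES];
static int mlog_len;

/* Hash a file on disk; returns 0 on success. */
static int hash_file(const char *path, unsigned char out[SHA_DIGEST_LENGTH])
{
    unsigned char buf[4096];
    size_t n;
    SHA_CTX ctx;
    FILE *f = fopen(path, "rb");
    if (!f)
        return -1;
    SHA1_Init(&ctx);
    while ((n = fread(buf, 1, sizeof(buf), f)) > 0)
        SHA1_Update(&ctx, buf, n);
    fclose(f);
    SHA1_Final(out, &ctx);
    return 0;
}

/* Steps (2) and (3): measure only when the program is new or dirty. */
static void measure_program(const char *path)
{
    for (int i = 0; i < mlog_len; i++) {
        if (strcmp(mlog[i].path, path) == 0) {
            if (!mlog[i].dirty)
                return;                       /* already measured, unchanged */
            hash_file(path, mlog[i].digest);  /* dirty: re-measure */
            mlog[i].dirty = 0;
            return;
        }
    }
    /* First load: append a fresh entry to the measurement log. */
    if (mlog_len < MAX_ENTRIES && hash_file(path, mlog[mlog_len].digest) == 0) {
        snprintf(mlog[mlog_len].path, sizeof(mlog[mlog_len].path), "%s", path);
        mlog[mlog_len].dirty = 0;
        mlog_len++;
    }
}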

Dynamic Measurement. Component measurement covers the code on which an executable directly depends, but it still performs measurement only when the code is loaded and thus cannot perform integrity verification afterwards. For attacks that happen while programs are running, such as self-modifying code attacks, component measurement cannot verify the integrity of the programs after they are loaded. To verify the integrity of programs after they are loaded, we propose a method for establishing a chain of trust that can measure the integrity status of programs at any time (we call it dynamic measurement) and can verify the integrity of programs after they are loaded.

Dynamic chains of trust aim to check and report the integrity of running programs in real time, and systems built on dynamic chains of trust must consider two factors: measurement objects and measurement timing. The scope and granularity of the measurement objects are determined by the system's security goals; the general principle is that the measurement objects must contain the programs capable of affecting the system's runtime integrity, such as kernel modules, system services and related user processes. In terms of timing, continuous real-time measurement would be ideal, but it would seriously degrade the running performance of the system. So we instead require that the measurement agent be able to perform measurement at any time while processes are running, which with high probability prevents undetected tampering with process integrity.

(1)System architecture

The architecture of dynamic measurement contains three layers: the hardware layer, the kernel layer and the user application layer. The security chip is in the hardware layer, and the measurement agent is divided into two parts, which reside in the kernel layer and the user application layer, respectively. The measurement agent in the user application layer receives measurement requests from users and then forwards them to the measurement agent in the kernel layer, which measures the processes and kernel modules running in the system and returns the measurement results to the agent in the user application layer. In order to ensure the continuity of the chain of trust, the measurement agent in the kernel layer must itself be measured by the trusted boot subsystem. Figure 3.9 depicts the dynamic measurement architecture.
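As a rough illustration of how the two halves of the measurement agent might communicate, the user-space half could pass a measurement request to the kernel half through a character device. This is only a sketch under assumptions: the device path /dev/dyn_measure, the request format and the response format are all hypothetical, since the concrete interface of the ISCAS system is not described here.

/* User-space half of the measurement agent (sketch).
 * Assumption: the kernel-space agent exposes a hypothetical device
 * /dev/dyn_measure that accepts a process name and returns its digest.
 */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>

int request_measurement(const char *process_name,
                        unsigned char *digest, size_t digest_len)
{
    int fd = open("/dev/dyn_measure", O_RDWR);   /* hypothetical interface */
    if (fd < 0)
        return -1;
    /* Send the request: which process the user wants measured. */
    if (write(fd, process_name, strlen(process_name)) < 0) {
        close(fd);
        return -1;
    }
    /* Read back the measurement result produced by the kernel agent. */
    ssize_t n = read(fd, digest, digest_len);
    close(fd);
    return n < 0 ? -1 : 0;
}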

(2)Process measurement

Process measurement captures the characteristic information of a process's integrity in the system. The Linux kernel maintains an in-memory data structure describing the state and parameters of every process, and the data structure of a specific process can be found by searching the process link list. This data structure is depicted in Figure 3.9, and some of its important parameters are as follows:

start_code, end_code: the start and end linear addresses of process’s code segment;

start_data, end_data: the start and end linear addresses of process’s data segment;

arg_start, arg_end: the start and end linear addresses of arguments of commands in the stack.

Figure 3.9: Dynamic measurement architecture (left) and stack of process image (right).

start_code labels the start address of the code segment. Attacks that tamper with code modify the content of this segment, so it is the primary region that dynamic measurement uses to verify process integrity. arg_start is the start address of the process's stack area for command arguments. The security strength of a running process is closely related to its command input arguments, so dynamic measurement also verifies the integrity of this part. start_data marks the memory region for data generated by the running process; because the content of this region has little relationship with the security of the process and changes constantly during the process's life cycle, we do not include it in the scope of dynamic measurement.

When the integrity of a process needs to be verified, the user requests the measurement agent to measure the current state of the process. The measurement steps are as follows (a kernel-side sketch of the first two steps follows the list):

(a)The measurement agent resolves the process name (or process ID) in the request, and obtains the process handle by searching the process description data structure in the process link list, which is maintained by the kernel.

(b)The measurement agent obtains all process description information through the process handle, including code segment, data segment, argument segment, stack segment and so on, and then obtains the linear address of the measurement object and the physical address by address translation.

(c)The kernel measurement module of the measurement agent maps the physical pages of the process into the kernel address space of the kernel measurement module.

(d)The measurement agent requests the TPM/TCM chip to hash the process data and extend the corresponding PCRs, and finally signs the integrity measurement results using the TPM/TCM, so that they can be used to attest the integrity of the process.
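A kernel-side fragment corresponding to steps (a) and (b) might look as follows. This is a sketch under assumptions: it only locates the process and the segment boundaries described above, it omits the page mapping, hashing and TPM/TCM extension of steps (c) and (d), and the exported symbols it relies on (find_vpid, pid_task, get_task_mm) are those of mainline Linux, whose details vary across kernel versions.

/* Fragment of a kernel module: locate the measurement region of a process.
 * Assumes a mainline Linux kernel; exact headers and exports vary by version.
 */
#include <linux/module.h>
#include <linux/sched.h>
#include <linux/sched/task.h>
#include <linux/mm.h>
#include <linux/pid.h>
#include <linux/errno.h>

static int locate_measurement_region(pid_t nr)
{
    struct task_struct *task;
    struct mm_struct *mm;

    /* Step (a): find the process handle in the kernel's process list. */
    rcu_read_lock();
    task = pid_task(find_vpid(nr), PIDTYPE_PID);
    if (!task) {
        rcu_read_unlock();
        return -ESRCH;
    }
    get_task_struct(task);
    rcu_read_unlock();

    /* Step (b): obtain the process description information. */
    mm = get_task_mm(task);
    if (!mm) {
        put_task_struct(task);
        return -EINVAL;
    }
    pr_info("code segment: %#lx - %#lx\n", mm->start_code, mm->end_code);
    pr_info("arg  segment: %#lx - %#lx\n", mm->arg_start, mm->arg_end);

    /* Steps (c) and (d) would map these user pages into kernel space,
     * hash them and ask the TPM/TCM to extend a PCR (omitted here). */
    mmput(mm);
    put_task_struct(task);
    return 0;
}

MODULE_LICENSE("GPL");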

(3)Kernel module measurement

In Linux, the dynamic measurement of kernel modules is similar to process measurement. Linux maintains a data structure called struct module for each kernel module. Communication between a kernel module and other modules can be achieved by accessing its struct module. The struct module instances of all kernel modules are linked into a module-link list. The struct module describes the address, size and other information of the kernel module. Dynamic measurement of a kernel module is achieved by means of these critical data structures.

After the kernel measurement module of the measurement agent receives a measurement request, if the request targets a kernel module and the module has been loaded into the Linux kernel, the measurement agent first resolves the kernel module name and then searches the module-link list maintained by the kernel to obtain the module handle. Through this handle, the kernel measurement agent obtains the address and size of the executable code in the module, and finally measures the kernel module dynamically using the same method as the dynamic measurement of processes.
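The same idea can be sketched for kernel modules, with heavy caveats: find_module() and module_mutex were exported to loadable modules only in older kernels (roughly before Linux 5.12), and the fields holding the code base and size changed names over time (module_core/core_size, later core_layout.base/core_layout.size), so they appear only in a comment.

/* Sketch: locate a loaded kernel module's code region for dynamic measurement.
 * Assumes a kernel where find_module() and module_mutex are available to
 * loadable modules (this changed around Linux 5.12).
 */
#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/errno.h>

static int locate_module_code(const char *name)
{
    struct module *mod;

    mutex_lock(&module_mutex);
    mod = find_module(name);          /* search the module-link list */
    if (!mod) {
        mutex_unlock(&module_mutex);
        return -ENOENT;
    }
    /* struct module records where the module's executable code lives,
     * e.g. mod->core_layout.base / mod->core_layout.size on newer kernels
     * (module_core / core_size on older ones).  That region would be hashed
     * and extended into a PCR exactly as in process measurement. */
    pr_info("measuring module %s\n", mod->name);
    mutex_unlock(&module_mutex);
    return 0;
}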

3.3.3.3The Trusted Network Connection Subsystem

With the rapid development of the Internet and cloud computing, terminal security has an increasing influence on the security and trustworthiness of the networked computing environment. Trusted computing technology proposes the idea of extending trustworthiness from terminals to the network, and this idea has become one of the important ways to improve the trustworthiness of the network environment. TCG proposes the TNC technology by combining chains of trust on terminals with traditional network access technology. TNC requires the network access server to verify the integrity of a terminal before it accesses the network, and only terminals meeting the system integrity requirements specified by the access policy are allowed to connect. TNC ensures that the terminal platforms in the network are trustworthy and prevents malicious terminals from accessing the network, which stops the spread of malicious code in the network at its origin.

The TNC subsystem extends the chain of trust of the terminal to the network. The TNC server makes access decisions based on the integrity status of terminal platforms, which is collected by component measurement and dynamic measurement on the terminal; this integrity status provides fine-grained and dynamic integrity evidence for the terminal's access to the network. First, component measurement provides component-level integrity information, which allows the TNC access server to assess the integrity of terminals more precisely against the access policy set by the network administrator. Second, dynamic measurement provides the latest integrity measurement information of terminal platforms and overcomes the TOCTOU problem to some extent. It can also provide technical support for dynamic integrity verification of terminals after they have connected to the network, further ensuring the security of the network.

3.4Systems Based on Dynamic Chain of Trust

As the static chain of trust does not address the runtime integrity status and its building procedure is complex, industry has proposed the technology of the dynamic chain of trust based on DRTM. Compared with the static chain of trust, the dynamic chain of trust can be established at any time and is suitable for many simple and constrained computing environments. The dynamic chain of trust is established by triggering a special CPU instruction. Its security does not rely on boot-time components such as the BIOS and bootloader that are needed when the system boots, so it can be used to solve some security problems of trusted boot systems. The dynamic chain of trust can be established at any time after the OS is launched and is usually used to establish a trusted execution environment for specific user applications.

Many research institutions have proposed schemes for establishing a dynamic chain of trust based on DRTM technology. OSLO [98] performs a comprehensive security analysis of trusted boot systems based on the static chain of trust and points out security risks arising from inherent defects and the large TCB of the static chain of trust. OSLO also reviews a variety of attacks on this kind of system: TPM reset attacks, BIOS replacement attacks and attacks exploiting bugs in trusted GRUB itself. The first two attacks are caused by the break in the chain of trust when the RTM and the TPM start asynchronously, and the third is due to bugs in the TCB. OSLO proposes to solve the above problems with a dynamic chain of trust, which uses the security extensions of AMD CPUs to transfer the RoT from the BIOS to the DRTM and uses the TPM's dynamic PCRs to store measurement results. In this way, OSLO solves the problem of the broken static chain of trust. OSLO also prevents attacks caused by bugs in the TCB by removing the BIOS and bootloader from the TCB. Intel developed a similar system called tboot, which uses Intel's TXT technology as the DRTM to establish trusted boot, and its principles are similar to those of OSLO.

A typical system based on the dynamic chain of trust in the OS layer is Flicker [103], designed by CMU CyLab, which establishes an isolated trusted execution environment based on AMD's CPU security extensions and can be used to protect user security-sensitive code (called a PAL). Flicker establishes a secure execution environment for PALs based on hardware isolation and extends measurements of PALs into dynamic PCRs to provide remote verifiers with evidence that the PALs ran in the isolated environment, thus achieving remote attestation. Compared with the static chain of trust, the typical features of Flicker are its flexible establishment time and the small size of its TCB. As Flicker can establish a dynamic chain of trust at any time, it can build trusted execution environments for multiple pieces of user code without rebooting the system. Flicker adopts DRTM technology to establish the chain of trust, which greatly reduces its TCB to only a small amount of hardware and software and thus reduces the security risks caused by the TCB.

Although Flicker provides a fine-grained trusted execution environment and its TCB is small, it has a great impact on system efficiency because it must launch the CPU's security instruction to build the isolated environment on every invocation. For code with a small workload, the cost of establishing the isolated environment can account for half of the overall cost. To solve this problem, researchers at CyLab proposed TrustVisor [104], a system that is more convenient to use and more efficient. To keep the TCB small, TrustVisor is implemented as a simple hypervisor above the hardware layer and only provides memory isolation, DMA protection and a micro-TPM (μTPM) with necessary interfaces such as Seal/Unseal, Extend and Quote. TrustVisor itself is protected by DRTM; it leverages the memory isolation mechanism to ensure that each PAL can only be invoked by TrustVisor and creates a μTPM instance for each PAL. Any invocation of or access to a PAL traps into TrustVisor, which controls these accesses, and only a correct invocation address is allowed to run the PAL. For illegal PAL invocations, TrustVisor returns an error to the calling application. The memory isolation mechanism and μTPM provided by TrustVisor not only protect users' sensitive code but also reduce the burden that DRTM places on the system, which gives it good application prospects.

3.4.1Chain of Trust at Bootloader

Compared with the static chain of trust, trusted boot systems based on DRTM have a smaller TCB and can avoid many attacks on the static chain of trust, thus providing a more secure environment for launching the OS. Such a trusted boot is usually implemented as a kernel start-up entry, and it measures the OS kernel in the secure environment established by the DRTM. In this way, the establishment of the chain of trust does not involve the BIOS and bootloader, so attacks leveraging bugs in the BIOS and bootloader are prevented. As the RoT is based on the DRTM, TPM reset attacks and BIOS replacement attacks are also prevented. The OS is booted in the following steps (a small sketch for inspecting the resulting dynamic PCRs follows the list):

(1)The user selects trusted boot in the bootloader, which triggers the CPU's security instruction. The security instruction resets the dynamic PCRs of the TPM and performs the DRTM's security check mechanism, such as TXT's launch control policy, which checks whether the component configuration meets the security requirements.

(2)If the hardware configuration satisfies the security requirements, the trusted execution environment is established. The trusted boot system runs in this environment, and its measurement is extended into PCR 17.

(3)The trusted boot system loads and measures the OS kernel and extends the measurement results to dynamic PCRs, and finally transfers execution control to the OS kernel.
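After step (3), the measurements recorded by the dynamic launch live in the dynamic PCRs (17 and above). One simple way to inspect them on Linux with a TPM 1.2 is to read the kernel's pcrs file; the sysfs path and the line format below are assumptions, since both differ across kernel and TPM versions.

/* Print the dynamic PCRs after a dynamic launch (sketch).
 * Assumption: a TPM 1.2 whose PCRs are exposed at the path below in the
 * form "PCR-17: AA BB ...".
 */
#include <stdio.h>

int main(void)
{
    const char *path = "/sys/class/tpm/tpm0/pcrs";   /* may be .../device/pcrs */
    char line[128];
    FILE *f = fopen(path, "r");
    if (!f) {
        perror(path);
        return 1;
    }
    /* Keep only the dynamic PCRs (index 17 and above). */
    while (fgets(line, sizeof(line), f)) {
        int idx;
        if (sscanf(line, "PCR-%d:", &idx) == 1 && idx >= 17)
            fputs(line, stdout);
    }
    fclose(f);
    return 0;
}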

3.4.2Chain of Trust in OS

The dynamic chain of trust in the OS is mainly used to build trusted execution environments for user security-sensitive code. As the TCB of the trusted execution environment built by DRTM consists only of a small amount of hardware and some initialization code, and the OS is excluded from the TCB, potential OS vulnerabilities do not affect the security of user applications. The dynamic chain of trust also prohibits DMA access by external devices during its establishment, which provides hardware protection for user applications and prevents DMA attacks on the system.

The architecture of the dynamic chain of trust in the OS layer is very simple: it is usually implemented as a kernel module that encapsulates the user's security-sensitive code (the PAL) into executable code that runs directly in the isolated environment built by DRTM. Because establishing the isolated environment requires suspending the OS each time and resuming it when the isolated environment ends, the chain of trust system must add code for saving the OS state and resuming the OS. The chain of trust in the OS is constructed as follows (for details, please refer to Figure 3.10):

(1)Acceptance of the PAL: The establishment of the dynamic chain of trust is triggered by a kernel module in the OS. The user sends an execution request to the kernel module; the request contains the application code and its parameters.

(2)Initialization of the PAL: After the kernel module receives the user's request, it assembles the PAL, the input parameters and other supporting system code, and then generates the executable code block (the PAL block) that runs directly in the isolated environment.

(3)Suspension of the OS: The execution of the DRTM security instruction does not save the context of the current running environment. In order to return to the current OS context after execution in the isolated environment built by DRTM, the context of the OS, such as the kernel page tables, must be saved before executing the security instruction.

(4)Execution of the security instruction: First, establish the DMA protection mechanism to prevent DMA attacks on the isolated environment; second, disable interrupts to prevent the OS from regaining control of the current execution environment; third, reset the dynamic PCRs, check the hardware configuration against the default security policy and then execute the assembled PAL block.

(5)Execution of the PAL: As its final step, the security instruction measures the PAL block, extends the measurement results into the TPM's dynamic PCRs and then executes the PAL. In order to protect the user's privacy, the PAL must clear any private information generated during its execution.

(6)Extension of the PCR: When the PAL completes its execution, a preset value is extended into the PCR to indicate the end of execution in the isolated environment.

(7)Recovery of the OS: When the PAL completes its execution, the CPU restores the OS context and transfers execution control back to the kernel module that triggered the establishment of the chain of trust. The kernel module obtains the PAL's output generated in the isolated environment and returns it to the user.

Figure 3.10: Building procedure of chain of trust in OS layer based on DRTM.

Although the dynamic chain of trust can achieve a relatively high level of security, the OS remains suspended for the entire time the isolated trusted execution environment is running, which makes the user unable to run other applications simultaneously. Since the code in the trusted execution environment no longer relies on the OS, the PAL must be self-contained and cannot rely on any code outside the isolated environment, which increases the development difficulty of PALs.
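To make the self-containment constraint concrete, a PAL is essentially a single leaf function: no libc, no system calls, no access to memory outside the isolated region, with all inputs and outputs passed through a buffer prepared by the kernel module. The entry signature and the trivial workload below are invented for illustration; real systems such as Flicker define their own PAL calling conventions.

/* A deliberately trivial PAL: it may not call libc, make system calls or
 * touch memory outside the isolated region, so even simple operations are
 * written by hand.  The struct and entry signature are hypothetical. */
struct pal_io {
    unsigned char input[256];
    unsigned long input_len;
    unsigned char output[256];
    unsigned long output_len;
};

void pal_entry(struct pal_io *io)
{
    /* Example workload: XOR-"seal" the input with a constant key held only
     * inside the PAL.  A real PAL would compile in a proper cipher. */
    const unsigned char key = 0x5a;
    unsigned long i;

    for (i = 0; i < io->input_len && i < sizeof(io->output); i++)
        io->output[i] = io->input[i] ^ key;
    io->output_len = i;

    /* Clear any sensitive intermediate state before returning control,
     * as required in step (5); nothing to clear in this toy example. */
}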

3.5Chain of Trust for Virtualization Platforms

With the development of virtualization technology, the application scope of virtualization platforms has become wider. However, traditional chains of trust are not applicable to virtualization platforms, so the chain of trust needs to be customized for them. On general platforms, each physical machine is equipped with an independent physical security chip, while a virtualization platform has only one security chip for all of its virtual machines. So there is no one-to-one relationship between virtual machines and security chips as there is on general platforms.

In order to solve the above problem, TCG proposes the Virtualized Trusted Platform Architecture Specification [105], which gives a common virtualization platform architecture ensuring that each guest virtual machine owns a dedicated virtual security chip. The virtualization platform architecture is divided into three layers: the top layer consists of the virtual machines, which are isolated from each other with the support of services provided by the virtual machine monitor (VMM); the second layer is the VMM, which provides services for virtual machines by virtualizing the hardware; the bottom layer is the hardware. Based on this architecture, the specification [105] proposes a general architecture of the virtualized trusted platform: The security chip and the physical RTM on the hardware layer provide trusted computing functions for the VMM layer, and the VMM layer provides a virtual security chip and a virtual RTM for every virtual machine in the upper layer through its internal virtualization platform manager (vPlatform Manager).

Due to its special architecture, the chain of trust system for virtualization platforms consists of two parts: the first part is the chain of trust from hardware to VMM, and the second part is the chain of trust from VMM to virtual machine. For the first part, we can leverage the security chip and the physical RTM on the hardware layer to establish a static chain of trust or a dynamic chain of trust from hardware to VMM.

In virtualization platforms, the security chip and RTM on the hardware layer establish the chain of trust from the hardware to the VMM, which guarantees the security of the VMM. After the VMM starts, it measures every instance of the virtual security chip and virtual RTM; the chain of trust can then be extended to the virtual machines. A virtual machine is unaware that it runs on a virtualization platform, and it can establish its own chain of trust by leveraging the virtual security chip and virtual RTM just as on a general platform. These operations establish the whole chain of trust from the hardware layer to the virtual machines.
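The one-chip-to-many-VMs relationship can be pictured as the vPlatform Manager keeping an independent virtual PCR bank per virtual machine, while the physical chip only records measurements of the VMM itself. The simulation below is a sketch, not the specification's interface: a real virtual security chip implements the full TPM/TCM command set, whereas this toy only shows that each guest's extend operations stay within its own bank.

/* Toy model of per-VM virtual PCR banks managed by a vPlatform Manager.
 * Compile with: gcc vtpm.c -lcrypto
 */
#include <string.h>
#include <openssl/sha.h>

#define NUM_VMS   8
#define NUM_PCRS  24
#define PCR_SIZE  SHA_DIGEST_LENGTH     /* SHA-1 sized PCRs, as in TPM 1.2 */

struct vtpm_instance {
    unsigned char pcr[NUM_PCRS][PCR_SIZE];
};

/* One virtual security chip instance per guest virtual machine. */
static struct vtpm_instance vtpms[NUM_VMS];

/* Record a measurement made by the virtual RTM of VM `vm` into one of its
 * own virtual PCRs; the banks of other VMs are never touched. */
int vtpm_extend(int vm, int pcr_index, const unsigned char digest[PCR_SIZE])
{
    unsigned char buf[2 * PCR_SIZE];

    if (vm < 0 || vm >= NUM_VMS || pcr_index < 0 || pcr_index >= NUM_PCRS)
        return -1;
    /* Standard extend rule: PCR_new = SHA-1(PCR_old || digest). */
    memcpy(buf, vtpms[vm].pcr[pcr_index], PCR_SIZE);
    memcpy(buf + PCR_SIZE, digest, PCR_SIZE);
    SHA1(buf, sizeof(buf), vtpms[vm].pcr[pcr_index]);
    return 0;
}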

3.6Summary

This chapter introduces the principles and methods proposed by trusted computing technology to protect computer systems by establishing chains of trust over the computer architecture. Trusted computing builds trusted execution environments for applications by measurement and transitive trust technologies and provides a fundamental solution to the security problems of computer systems. The chain of trust protects the execution environment of terminals, and, combined with remote attestation technology, it can extend trustworthiness to the network or to remote verifiers.

Chains of trust can be categorized into static and dynamic chains of trust by the type of root of trust. The static chain of trust takes the first code that runs when the platform powers on (the CRTM) as the root of trust and then establishes the whole chain of trust by measurement and transitive trust, layer by layer. The dynamic chain of trust takes the DRTM technology provided by AMD's or Intel's CPU security extensions as its root of trust and builds a secure, isolated execution environment with hardware protection mechanisms. The two types of chains of trust each have advantages and disadvantages: The static chain of trust can guarantee the security of the whole system, but it can only be established when the platform powers on, and it also suffers from the high computational cost of measurement and from complex integrity management. The dynamic chain of trust is much better than the static chain of trust in terms of establishment timing and TCB size, but it still has problems with user experience and development difficulty. Overall, the trends in chain of trust technology are reducing the TCB and protecting the security of the system at runtime. With the popularization of virtualization technology and smartphones, chains of trust will have wide application prospects on these new computing platforms.