Trusted execution environment: Difference between revisions

To prevent user-controlled software from simulating the hardware, a so-called "hardware root of trust" is used. This is a [[Trusted_computing#Endorsement_key|set of private keys embedded directly into the chip during manufacturing]]; on mobile devices, one-time programmable memory such as [[eFuse]]s is usually used, so the keys cannot be changed even when the device is reset. Their public counterparts reside in a manufacturer database, together with a non-secret hash of a public key belonging to the trusted party (usually the chip vendor), which is used to sign trusted firmware alongside the circuits performing cryptographic operations and controlling access.
 
The hardware is designed in a way that prevents all software not signed by the trusted party's key from accessing the privileged features. The public key of the vendor is provided at runtime and hashed; this hash is then compared to the one embedded in the chip. If the hash matches, the public key is used to verify a [[digital signature]] of trusted vendor-controlled firmware (such as a [[Booting process of Android devices|chain of bootloaders on Android devices]] or 'architectural enclaves' in SGX). The trusted firmware is then used to implement remote attestation.<ref>{{Cite web|url=https://www.researchgate.net/publication/342833256|title=Towards Formalization of Enhanced Privacy ID (EPID)-based Remote Attestation in Intel SGX}}</ref>
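The verification order described above can be sketched in a few lines. This is a toy model only: the key bytes are placeholders, and the "signature" is a bare hash standing in for a real asymmetric scheme such as RSA or ECDSA.

```python
import hashlib

# Toy model of secure-boot key checking; all values are illustrative.
VENDOR_PUBKEY = b"vendor-public-key-bytes"                  # supplied at runtime
FUSED_PUBKEY_HASH = hashlib.sha256(VENDOR_PUBKEY).digest()  # burned into eFuses at manufacturing

def toy_sign(firmware: bytes, key: bytes) -> bytes:
    # Stand-in for an asymmetric signature; not a real signature scheme.
    return hashlib.sha256(key + firmware).digest()

def verify_firmware(pubkey: bytes, firmware: bytes, signature: bytes) -> bool:
    # Step 1: the runtime-supplied public key must hash to the fused value.
    if hashlib.sha256(pubkey).digest() != FUSED_PUBKEY_HASH:
        return False
    # Step 2: only a key that passed step 1 may verify the firmware image.
    return signature == toy_sign(firmware, pubkey)

fw = b"bootloader-stage-2"
sig = toy_sign(fw, VENDOR_PUBKEY)
print(verify_firmware(VENDOR_PUBKEY, fw, sig))    # genuine key and signature accepted
print(verify_firmware(b"attacker-key", fw, sig))  # key with the wrong hash rejected
```

The point of the two-step structure is that the chip only needs to store a small immutable hash, not the full public key, while still refusing any firmware signed by a key the vendor did not commit to at manufacturing time.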
 
When an application is attested, its untrusted component loads its trusted component into memory; the hardware protects the trusted application from modification by untrusted components. The untrusted party requests a [[Cryptographic nonce|nonce]] from the verifier's server, and the nonce is used in a cryptographic authentication protocol that proves the integrity of the trusted application. The proof is passed to the verifier, which verifies it. A valid proof cannot be computed in simulated hardware (e.g. [[QEMU]]), because constructing it requires access to the keys baked into the hardware; only trusted firmware has access to these keys and/or to keys derived from or obtained using them. Because only the platform owner is meant to have access to the data recorded in the foundry, the verifying party must interact with the service set up by the vendor. If the scheme is implemented improperly, the chip vendor can track which applications are used on which chip and selectively deny service by returning a message indicating that authentication has not passed.<ref>{{cite web | url=https://optee.readthedocs.io/en/latest/building/devices/qemu.html | title=QEMU v7 — OP-TEE documentation documentation | access-date=2022-06-02 | archive-date=2022-06-25 | archive-url=https://web.archive.org/web/20220625012352/https://optee.readthedocs.io/en/latest/building/devices/qemu.html | url-status=live }}</ref>
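The nonce-based round trip can be sketched as follows. The device key, the HMAC construction, and the message format are illustrative assumptions; real TEEs bind the quote to hardware with asymmetric keys (e.g. EPID in SGX) rather than a shared secret.

```python
import hashlib
import hmac
import secrets

# Toy sketch of the attestation round trip; keys and formats are assumptions.
DEVICE_KEY = secrets.token_bytes(32)  # baked into hardware, known only to the vendor's service
TRUSTED_APP = b"trusted-app-code"

def verifier_issue_nonce() -> bytes:
    # A fresh nonce per request prevents replaying an old proof.
    return secrets.token_bytes(16)

def device_quote(nonce: bytes) -> bytes:
    # Trusted firmware measures the loaded application and binds it to the nonce.
    measurement = hashlib.sha256(TRUSTED_APP).digest()
    return hmac.new(DEVICE_KEY, nonce + measurement, hashlib.sha256).digest()

def vendor_service_check(nonce: bytes, quote: bytes, expected_app: bytes) -> bool:
    # Only the vendor's service holds DEVICE_KEY, so only it can validate the quote;
    # this is why the verifying party must interact with the vendor's service.
    expected = hmac.new(DEVICE_KEY,
                        nonce + hashlib.sha256(expected_app).digest(),
                        hashlib.sha256).digest()
    return hmac.compare_digest(quote, expected)

nonce = verifier_issue_nonce()
print(vendor_service_check(nonce, device_quote(nonce), TRUSTED_APP))  # genuine app passes
print(vendor_service_check(nonce, device_quote(nonce), b"tampered"))  # modified app fails
```

A simulator such as QEMU fails this protocol at `device_quote`: without the hardware-held key it cannot produce a quote the vendor's service will accept.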