Trusted computing base: Difference between revisions

{{distinguish|Trusted Computing}}
 
The '''trusted computing base''' ('''TCB''') of a [[computer system]] is the set of all [[Computer hardware|hardware]], [[firmware]], and/or [[software]] components that are critical to its [[computer security|security]], in the sense that [[Software bug|bugs]] or [[Vulnerability (computing)|vulnerabilities]] occurring inside the TCB might jeopardize the security properties of the entire system. By contrast, parts of a computer system that lie outside the TCB must not be able to misbehave in a way that would leak any more [[privilege (computer science)|privilege]]s than are granted to them in accordance with the system's [[security policy]].
 
The careful design and implementation of a system's trusted computing base is paramount to its overall security. Modern [[operating system]]s strive to reduce the size of the TCB{{Citation needed lead|date=February 2019}} so that an exhaustive examination of its code base (by means of manual or computer-assisted [[software audit review|software audit]] or [[program verification]]) becomes feasible.
: ''<nowiki>[t]</nowiki>he ability of a trusted computing base to enforce correctly a unified security policy depends on the correctness of the mechanisms within the trusted computing base, the protection of those mechanisms to ensure their correctness, and the correct input of parameters related to the security policy.''
 
In other words, a given piece of hardware or software is a part of the TCB if and only if it has been designed to be a part of the mechanism that provides security to the computer system. In [[operating system]]s, this typically consists of the kernel (or [[microkernel]]) and a select set of system utilities (for example, [[setuid]] programs and [[Daemon (computer software)|daemons]] in UNIX systems). In [[programming language]]s designed with built-in security features, such as [[Java (programming language)|Java]] and [[E (programming language)|E]], the TCB is formed of the language runtime and standard library.<ref>M. Miller, C. Morningstar and B. Frantz, [http://www.erights.org/elib/capability/ode/ode-linear.html Capability-based Financial Instruments (An Ode to the Granovetter diagram)], in paragraph ''Subjective Aggregation''.</ref>
 
==Properties==
 
===Software parts of the TCB need to protect themselves===
As outlined by the aforementioned Orange Book, software portions of the trusted computing base need to protect themselves against tampering to be of any effect. This is due to the [[von Neumann architecture]] implemented by virtually all modern computers: since [[machine code]] can be processed as just another kind of data, it can be read and overwritten by any program. This can be prevented by special [[memory management]] provisions that subsequently have to be treated as part of the TCB. Specifically, the trusted computing base must at least prevent its own software from being written to.
 
In many modern [[CPU]]s, the protection of the memory that hosts the TCB is achieved by adding a specialized piece of hardware called the [[memory management unit]] (MMU), which is programmable by the operating system to allow or deny a running program access to specific ranges of the system memory. Of course, the operating system can also prevent other programs from performing such programming. This technique is called [[supervisor mode]]; compared to cruder approaches (such as storing the TCB in [[Read-only memory|ROM]], or equivalently, using the [[Harvard architecture]]), it has the advantage of allowing the security-critical software to be upgraded in the field, although allowing secure upgrades of the trusted computing base poses bootstrap problems of its own.<ref>[http://citeseer.ist.psu.edu/article/arbaugh97secure.html A Secure and Reliable Bootstrap Architecture], ''op. cit.''</ref>
 
===Trusted vs. trustworthy===
As stated [[#A prerequisite to security|above]], trust in the trusted computing base is required to make any progress in ascertaining the security of the computer system. In other words, the trusted computing base is “trusted” first and foremost in the sense that it ''has'' to be trusted, and not necessarily that it is trustworthy. Real-world operating systems routinely have security-critical bugs discovered in them, which attests to the practical limits of such trust.<ref>[[Bruce Schneier]], [http://www.schneier.com/crypto-gram-0103.html#1 The security patch treadmill] (2001)</ref>
 
The alternative is formal [[software verification]], which uses mathematical proof techniques to show the absence of bugs. Researchers at [[NICTA]] and its spinout [[Open Kernel Labs]] have performed such a formal verification of [http://ssrg.nicta.com.au/projects/seL4/ seL4], a member of the [[L4 microkernel|L4 microkernel family]], proving functional correctness of the C implementation of the kernel.<ref Name="Klein_EHACDEEKNSTW_09">