Computer architecture

This is an old revision of this page, as edited by Mudlock (talk | contribs) at 15:29, 15 April 2002.

Computer architecture refers to the theory behind the design of a computer. In the same way that a building architect sets the principles and goals of a building project as the basis for the draftsman's plans, a computer architect sets out the computer architecture as a basis for the actual design specifications.

There are two customary usages of the term:

The more academic usage refers to the design of a computer's underlying language - its "instruction set." This will include information such as whether the computer's processor can compute the product of two numbers without resorting to external memory. It also specifies the nominal precision of the computer's computations.

The less formal usage refers to a description of the requirements (especially speed and interconnection requirements) or design implementation of the various parts of a computer, such as memory, the motherboard, electronic peripherals, or, most commonly, the central processing unit.


Design Goals

The most common goals in a computer architecture revolve around the tradeoff between cost and performance (i.e. speed), although other considerations, such as size, weight, and power consumption, may be factors as well.

1. Cost

Generally, cost is held constant, determined by either system or commercial requirements, and speed and storage capacity are adjusted to meet the cost target.

2. Performance

Computer retailers describe the performance of their machines in terms of clock speed (usually in MHz or GHz). This refers to the cycles per second of the main clock of the CPU. However, this metric is somewhat misleading, as a machine with a higher clock rate may not necessarily have higher performance. Most modern CPUs are capable of executing multiple instructions per clock cycle (see superscalar), which can have a dramatic effect on how quickly a program can run, as can other factors, such as the mix of functional units, bus speeds, available memory, and the type and order of instructions in the programs being run.
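The relationship above is often summarized by the classic performance equation: execution time equals instruction count times cycles per instruction, divided by clock rate. The following sketch uses hypothetical CPUs and figures, chosen only to illustrate that a higher clock rate does not guarantee a faster machine:

```python
# Sketch of the CPU performance equation:
#   execution time = instruction count * cycles per instruction / clock rate
# All machine names and numbers here are hypothetical.

def execution_time(instructions: float, cpi: float, clock_hz: float) -> float:
    """Seconds needed to run `instructions` instructions at `cpi` cycles each."""
    return instructions * cpi / clock_hz

# Two hypothetical machines running the same 1-billion-instruction program.
# CPU A has the higher clock, but CPU B retires more instructions per cycle
# (a lower CPI), as a superscalar design might.
time_a = execution_time(1e9, cpi=1.5, clock_hz=2.0e9)  # 2.0 GHz, 1.5 CPI
time_b = execution_time(1e9, cpi=0.8, clock_hz=1.5e9)  # 1.5 GHz, 0.8 CPI

print(f"CPU A: {time_a:.3f} s, CPU B: {time_b:.3f} s")
```

Despite a 33% lower clock rate, the second machine finishes the program sooner, which is exactly why clock speed alone is a misleading metric.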

Better metrics can be obtained by benchmarking, which attempts to take all these factors into account by measuring the time the system takes to run through a series of provided programs. This can be used to obtain an average throughput for the system. Even so, benchmarking may not show that one of two systems is clearly better, since one system may, for example, be optimized to handle scientific applications, and another may be optimized to play popular video games.
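A minimal benchmarking sketch of this idea, assuming two hypothetical workloads stand in for the "series of provided programs": each is timed with the wall clock, and the results are combined with a geometric mean, a common way to average benchmark times so that no single program dominates the score.

```python
import math
import time

def time_workload(fn, repeats: int = 3) -> float:
    """Return the best-of-N wall-clock time for fn(), in seconds."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        best = min(best, time.perf_counter() - start)
    return best

# Hypothetical benchmark suite: sorting and arithmetic-heavy summation.
suite = {
    "sort": lambda: sorted(range(100_000, 0, -1)),
    "sum":  lambda: sum(i * i for i in range(100_000)),
}

times = {name: time_workload(fn) for name, fn in suite.items()}
geomean = math.exp(sum(math.log(t) for t in times.values()) / len(times))
print(times, f"geometric mean: {geomean:.6f} s")
```

A real benchmark (SPEC, for instance) works the same way in outline, but with much larger, standardized programs and strict rules about compilation and reporting.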

Another important performance consideration is interrupt latency, which is the guaranteed maximum response time of the system to an event such as the click of a mouse or the reception of data by a modem. This number is also affected by a very wide range of design choices. Computers that control machinery usually need low interrupt latencies, because the machine cannot, will not, or should not wait. For example, computer-controlled anti-lock brakes should not wait for the computer to finish what it's doing - they should brake.

The general scheme of optimization is to budget the different parts of the computer system separately. In a balanced computer system, the data rate will be constant for all parts of the system, and cost will be allocated proportionally to ensure this. The exact form of the computer system will depend on the constraints and goals it was optimized for.
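The budgeting idea can be sketched numerically: if each part of the system has a sustained data rate, the whole system can move data no faster than its slowest part, so that is where extra cost (or a redesign) pays off first. The component names and rates below are hypothetical, for illustration only.

```python
# Hypothetical sustained data rates, in bytes per second, for the parts
# of an imaginary system. A balanced design would spend cost to raise
# the slowest of these until the rates are roughly equal.
rates = {
    "cpu_bus": 8.0e9,
    "memory":  6.4e9,
    "disk":    0.5e9,
}

bottleneck = min(rates, key=rates.get)
system_rate = rates[bottleneck]
print(f"bottleneck: {bottleneck} at {system_rate:.1e} B/s")
```

In this made-up example the disk limits the whole system to a fraction of what the bus and memory could sustain, so a cost-proportional budget would direct money toward faster storage rather than a faster CPU.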

Also see CPU design.