'''Computer architecture''' refers to the theory behind the design of a computer. In the same way that a building architect sets the principles and goals of a building project as the basis for the draftsman's plans, a computer architect sets out a computer architecture as the basis for the actual design specifications.
There are two customary usages of the term:
The more academic usage refers to the design of a computer's underlying language, its "[[instruction set]]". This includes information such as whether the computer's processor can compute the product of two numbers without resorting to external memory. It also includes a nominal precision for the computer's computations.
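As an illustration of an instruction-set design choice like the one above, a processor that lacks a hardware multiply instruction can still compute a product using only shifts and adds. The sketch below is hypothetical (it models the shift-and-add technique, not any particular real instruction set):

```python
def multiply_by_shift_add(a: int, b: int) -> int:
    """Compute a * b for non-negative b using only shifts and adds,
    as a CPU without a hardware multiply instruction might do."""
    product = 0
    while b > 0:
        if b & 1:      # low bit of the multiplier is set:
            product += a  # add the current shifted multiplicand
        a <<= 1        # shift the multiplicand left (double it)
        b >>= 1        # shift the multiplier right (next bit)
    return product

print(multiply_by_shift_add(6, 7))  # 42
```

Whether such an operation is provided in hardware, in microcode, or left to software is exactly the kind of question an instruction-set architecture settles.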
The less formal usage refers to a description of the requirements (especially speeds and interconnection requirements) or design implementation of the various parts of a computer, such as [[computer memory|memory]], the [[motherboard]], [[electronic]] [[peripheral]]s, or most commonly the [[central processing unit]].
'''Design Goals'''
The goals of a computer architecture usually revolve around the tradeoff between cost and performance (i.e. speed), although other considerations, such as size and weight, may be factors as well.
1. Cost
Generally, cost is held constant, determined by either system or commercial requirements, and speed and storage capacity are adjusted to meet the cost target.
2. Performance (speed)
Computer retailers describe the performance of their machines in terms of clock speed (usually in MHz or GHz). This refers to the cycles per second of the main clock of the [[CPU]]. However, this metric is somewhat misleading, as a machine with a higher clock rate does not necessarily have higher performance. Most modern CPUs are capable of executing multiple instructions per clock cycle (see "[[superscalar]]"), which can have a dramatic effect on how quickly a program runs, as can other factors, such as the mix of [[functional unit]]s, [[computer bus|bus]] speeds, available memory, and the type and order of instructions in the programs being run.
Better metrics can be obtained by [[benchmark]]ing, which attempts to take all these factors into account by measuring the time the system takes to run through a series of provided programs. This can be used to obtain an average throughput for the system. Even so, benchmarks may not clearly show which of two systems is better, since one system may, for example, be optimized for scientific applications, and another optimized for playing popular video games.
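The benchmarking idea above can be sketched in a few lines: run a workload repeatedly and keep the best time, which is how benchmark harnesses commonly reduce noise from other system activity. The workload here is a hypothetical stand-in, not a real benchmark suite:

```python
import timeit

# A toy workload standing in for one of the benchmark's programs
# (hypothetical; real suites use representative application code).
def workload():
    return sum(i * i for i in range(10_000))

# Time 100 runs of the workload, repeated 5 times; report the best,
# since the minimum is least affected by background activity.
runs = timeit.repeat(workload, number=100, repeat=5)
print(f"best of 5: {min(runs):.4f} s for 100 runs")
```

Averaging or taking the minimum over several repetitions is a design choice; either way, a single number still cannot capture how a machine behaves across very different workloads.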
Throughput is the absolute processing power of the computer system. In most computer systems, throughput is limited by the speed of the slowest piece of hardware in use at a given time. This slowest piece might be input and output (I/O), the CPU, the memory chips themselves, or the connection (or "bus") between the memory, the CPU and the I/O. The limit most acceptable to users is the speed of the input, because the computer then seems infinitely fast. General-purpose computers like PCs usually maximize throughput to increase user satisfaction.
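The bottleneck effect described above can be made concrete with a back-of-the-envelope calculation. The figures below are illustrative assumptions, not measurements of any real system:

```python
# Throughput of a data path is set by its slowest stage.
# Illustrative per-stage bandwidths in MB/s (hypothetical values).
stages_mb_per_s = {
    "disk I/O": 50,
    "memory bus": 400,
    "CPU": 500,
}

# The stage with the lowest bandwidth caps the whole system.
bottleneck = min(stages_mb_per_s, key=stages_mb_per_s.get)
print(f"system throughput ~{stages_mb_per_s[bottleneck]} MB/s, "
      f"limited by {bottleneck}")
```

Here the CPU could process data ten times faster than the disk can deliver it, so buying a faster CPU would barely change observed throughput; speeding up the slowest stage would.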
"[[Interrupt latency]]" is the guaranteed maximum response time of the software to an event such as the click of a mouse or the reception of data by a modem. This number is affected by a very wide range of design choices. Computers that control machinery usually need low interrupt latencies, because the machine can't, won't or should not wait. For example, computer-controlled anti-lock brakes should not wait for the computer to finish what it's doing- they should brake.