Commodity computing
During the 1980s, microcomputers began displacing "real" computers in a serious way. At first, price was the key justification, but by the mid-1980s semiconductor technology had evolved to the point where microprocessor performance began to eclipse that of discrete logic designs. These traditional designs were limited by the speed-of-light delays inherent in any CPU larger than a single chip, and performance alone began driving the success of microprocessor-based systems.
 
The old processor architectures began to fall, first minis, then [[supermini]]s, and finally [[mainframe]]s. By the mid-1990s, every computer made was based on a microprocessor, and most were microcomputers compatible with the IBM PC. Although there was a time when every traditional computer manufacturer had its own proprietary microprocessor-based designs, only a few manufacturers of non-commodity computer systems remain today. However, supermicrocomputers (large-scale computer systems based on one or more microprocessors, such as the IBM p, i, and z series) still own the high end of the market.
 
As the power of microprocessors continues to increase, there are fewer and fewer business computing needs that cannot be met with off-the-shelf commodity computers. It is likely that the low end of the supermicrocomputer genre will continue to be pushed upward by increasingly powerful commodity microcomputers. Fewer non-commodity systems will be sold each year, leaving fewer dollars available for non-commodity R&D and continually narrowing the performance gap between commodity microcomputers and proprietary supermicros.