Memory hierarchy
The [[hierarchical]] arrangement of [[storage]] in current [[computer architecture]]s is called the '''memory hierarchy'''. Each level of the hierarchy is of higher speed, lower latency, and smaller size than the levels below it.
 
Most modern [[CPU]]s are so fast that, for most program workloads, the [[locality of reference]] of memory accesses and the efficiency of [[caching]] and memory transfer between levels of the hierarchy are the practical limitation on processing speed. As a result, the CPU spends much of its time idling, waiting for memory I/O to complete.
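The effect of locality of reference can be sketched with a small experiment: summing the same flat array once in sequential order and once with a large stride. Sequential access uses every byte of each cache line fetched, while strided access wastes most of it. This is an illustrative sketch, not from the article; in CPython the interpreter overhead dampens the effect, but the access pattern is the same one that dominates performance in compiled code.

```python
# Illustrative sketch: sequential vs. large-stride traversal of one array.
# All sizes and the stride are assumed round numbers, not measurements.
from array import array
import timeit

n = 1 << 20                  # 1M 4-byte ints, larger than typical L1/L2 caches
a = array('i', [1]) * n

def sum_with_stride(a, stride):
    """Sum all elements, visiting them `stride` apart (stride=1 is sequential)."""
    s = 0
    m = len(a)
    for start in range(stride):
        for i in range(start, m, stride):
            s += a[i]
    return s

# Both orders visit every element exactly once, so the sums agree.
assert sum_with_stride(a, 1) == sum_with_stride(a, 4096) == n

t_seq = timeit.timeit(lambda: sum_with_stride(a, 1), number=3)
t_str = timeit.timeit(lambda: sum_with_stride(a, 4096), number=3)
print(f"sequential: {t_seq:.3f}s  strided: {t_str:.3f}s")
```

On most hardware the strided walk is measurably slower even from Python, because each access touches a different cache line (and, at this stride, often a different page).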
The memory hierarchy in most computers is as follows:
 
* [[CPU register]]s (fastest possible access, only dozens of bytes at most)
* [[Level 1 cache]] (often accessed in just a few cycles, usually tens of kilobytes)
* [[Level 2 cache]] (latency 2–10 times higher than L1, often 512 KB or more)
* [[DRAM]] (may take hundreds of cycles, but can be multiple gigabytes)
* [[Disk storage]] (hundreds of thousands of cycles latency, but very large)
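The ballpark figures above can be combined into a back-of-the-envelope estimate of average memory access time (AMAT) for a two-level cache over DRAM. The cycle counts and hit rates below are assumed round numbers for illustration, not figures from the article:

```python
# Back-of-the-envelope AMAT for a two-level cache hierarchy over DRAM.
# All numbers are illustrative assumptions, not measurements.
l1_time   = 3     # cycles: "accessed in just a few cycles"
l2_time   = 15    # cycles: several times the L1 latency
dram_time = 200   # cycles: "hundreds of cycles"

l1_hit_rate = 0.95   # fraction of accesses served by L1 (assumed)
l2_hit_rate = 0.90   # fraction of L1 misses served by L2 (assumed)

# Each level's latency is paid only by the fraction of accesses
# that miss every level above it.
amat = l1_time + (1 - l1_hit_rate) * (l2_time + (1 - l2_hit_rate) * dram_time)
print(amat)  # 4.75
```

Even with DRAM some 70 times slower than L1, high hit rates keep the average access close to the L1 latency, which is why caching makes the hierarchy workable at all.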
 