* A write-through cache uses no-write allocate. Here, subsequent writes have no advantage, since they still need to be written directly to the backing store.
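
A minimal sketch of write-through with no-write allocate, assuming a hypothetical direct-mapped cache of a few single-word entries in front of an array standing in for the backing store; the names (<code>cache_write</code>, <code>cache_read</code>) and sizes are illustrative only, not taken from any particular implementation:

<syntaxhighlight lang="c">
#include <stdbool.h>
#include <stdio.h>

/* Illustrative sketch only: tiny direct-mapped cache over an array backing store. */
#define LINES 8
#define STORE_SIZE 64

static int backing_store[STORE_SIZE];                           /* the slow backing store */
static struct { bool valid; int tag; int data; } cache[LINES];  /* tiny direct-mapped cache */

/* Write-through with no-write allocate: every write goes straight to the
   backing store; the cache is only updated if the address already hits. */
static void cache_write(int addr, int value)
{
    backing_store[addr] = value;                 /* write-through: always update the store */
    int line = addr % LINES;
    if (cache[line].valid && cache[line].tag == addr)
        cache[line].data = value;                /* keep an existing cached copy consistent */
    /* On a miss, nothing is allocated: a later read must fetch from the store. */
}

/* Reads allocate on a miss, as usual. */
static int cache_read(int addr)
{
    int line = addr % LINES;
    if (!(cache[line].valid && cache[line].tag == addr)) {
        cache[line].valid = true;                /* miss: fetch from the backing store */
        cache[line].tag = addr;
        cache[line].data = backing_store[addr];
    }
    return cache[line].data;
}

int main(void)
{
    cache_write(3, 42);                          /* write miss: goes to the store only */
    printf("%d\n", cache_read(3));               /* read miss then fetches 42 from the store */
    return 0;
}
</syntaxhighlight>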
 
Entities other than the cache may change the data in the backing store, in which case the copy in the cache may become out-of-date or ''stale''. Alternatively, when the client updates the data in the cache, copies of those data in other caches will become stale. Communication protocols between the cache managers that keep the data consistent are associated with [[cache coherence]].<!--[[User:Kvng/RTH]]-->
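
A minimal sketch of one way cache managers can keep copies consistent, assuming a hypothetical write-invalidate scheme in which a writer updates the shared backing store and marks the copies held by other caches as stale; the structures and names are illustrative and do not correspond to any particular coherence protocol:

<syntaxhighlight lang="c">
#include <stdbool.h>
#include <stdio.h>

/* Illustrative sketch only: two caches sharing one backing store. */
#define NCACHES 2
#define NADDRS  16

static int shared_store[NADDRS];                       /* backing store shared by all caches */
static struct { bool valid; int data; } caches[NCACHES][NADDRS];

/* Write-invalidate: the writer updates the store and its own copy,
   then tells every other cache manager to mark its copy stale. */
static void coherent_write(int owner, int addr, int value)
{
    shared_store[addr] = value;
    caches[owner][addr].valid = true;
    caches[owner][addr].data = value;
    for (int c = 0; c < NCACHES; c++)
        if (c != owner)
            caches[c][addr].valid = false;             /* copies elsewhere are now stale */
}

static int coherent_read(int reader, int addr)
{
    if (!caches[reader][addr].valid) {                 /* stale or absent: refetch */
        caches[reader][addr].valid = true;
        caches[reader][addr].data = shared_store[addr];
    }
    return caches[reader][addr].data;
}

int main(void)
{
    coherent_write(0, 5, 7);                           /* cache 0 writes */
    printf("%d\n", coherent_read(1, 5));               /* cache 1 refetches the fresh value 7 */
    return 0;
}
</syntaxhighlight>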
 
===Prefetch===
{{see|Memory paging#Page replacement techniques}}
 
On a cache read miss, caches with a ''[[demand paging]] policy'' read the minimum amount from the backing store. For example, a typical demand-paging virtual memory implementation reads one page of virtual memory (often 4&nbsp;KB) from disk into the disk cache in RAM. Similarly, a typical CPU reads a single L2 cache line of 128&nbsp;bytes from DRAM into the L2 cache, and a single L1 cache line of 64&nbsp;bytes from the L2 cache into the L1 cache.
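
A minimal sketch of demand fetching, assuming a hypothetical single-slot cache that, on a read miss, copies exactly one fixed-size block from an array standing in for the backing store and nothing more; the block size and names are illustrative:

<syntaxhighlight lang="c">
#include <stdio.h>
#include <string.h>

/* Illustrative sketch only: one cached block in front of an array backing store. */
#define BLOCK_SIZE 64                        /* e.g. one 64-byte L1 line; size is illustrative */
#define STORE_SIZE (BLOCK_SIZE * 16)

static unsigned char backing_store[STORE_SIZE];
static unsigned char cached_block[BLOCK_SIZE];
static int cached_block_no = -1;             /* which block the single cache slot currently holds */

/* Demand fetch: on a miss, read exactly the one block containing the
   requested byte from the backing store. */
static unsigned char demand_read(int addr)
{
    int block_no = addr / BLOCK_SIZE;
    if (block_no != cached_block_no) {       /* read miss */
        memcpy(cached_block, &backing_store[block_no * BLOCK_SIZE], BLOCK_SIZE);
        cached_block_no = block_no;
    }
    return cached_block[addr % BLOCK_SIZE];  /* serve the byte from the cached block */
}

int main(void)
{
    backing_store[70] = 9;
    printf("%d\n", demand_read(70));         /* miss fetches only block 1 (bytes 64-127) */
    return 0;
}
</syntaxhighlight>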
 
Caches with a [[prefetch input queue]] or more general ''anticipatory paging policy'' go further—they not only read the data requested, but guess that the next chunk or two of data will soon be required, and so prefetch that data into the cache ahead of time. Anticipatory paging is especially helpful when the backing store has a long latency to read the first chunk and much shorter times to sequentially read the next few chunks, such as [[disk storage]] and DRAM.
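
A minimal sketch of sequential read-ahead in this spirit, assuming a hypothetical block cache that, on a miss, fetches the requested block and speculatively prefetches the next one; the names and sizes are illustrative:

<syntaxhighlight lang="c">
#include <stdio.h>
#include <string.h>

/* Illustrative sketch only: block cache with one-block-ahead prefetching. */
#define BLOCK_SIZE 64
#define NBLOCKS    16
#define STORE_SIZE (BLOCK_SIZE * NBLOCKS)

static unsigned char backing_store[STORE_SIZE];
static unsigned char cache[STORE_SIZE];      /* one cache slot per block, at the same offset */
static int present[NBLOCKS];                 /* 1 if the block is already cached */

static void fetch_block(int block_no)
{
    memcpy(&cache[block_no * BLOCK_SIZE],
           &backing_store[block_no * BLOCK_SIZE], BLOCK_SIZE);
    present[block_no] = 1;
}

/* Anticipatory fetch: on a miss, read the requested block and also
   prefetch the next sequential block, betting on sequential access. */
static unsigned char prefetching_read(int addr)
{
    int block_no = addr / BLOCK_SIZE;
    if (!present[block_no]) {
        fetch_block(block_no);                       /* demand fetch */
        if (block_no + 1 < NBLOCKS && !present[block_no + 1])
            fetch_block(block_no + 1);               /* speculative prefetch */
    }
    return cache[addr];
}

int main(void)
{
    backing_store[70] = 5;
    prefetching_read(0);                             /* demand-fetches block 0, prefetches block 1 */
    printf("%d\n", prefetching_read(70));            /* hit: block 1 was already prefetched */
    return 0;
}
</syntaxhighlight>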
 
A few operating systems go further with a [[loader (computing)|loader]] that always pre-loads the entire executable into RAM. A few caches go even further, not only pre-loading an entire file, but also starting to load other related files that may soon be requested, such as the [[page cache]] associated with a [[prefetcher]] or the [[web cache]] associated with [[link prefetching]].<!--[[User:Kvng/RTH]]-->
 
=={{anchor|HARDWARE}}Examples of hardware caches==