During a cache miss, some other previously existing cache entry is typically removed in order to make room for the newly retrieved data. The [[Heuristic (computer science)|heuristic]] used to select the entry to replace is known as the [[Cache replacement policies|replacement policy]]. One popular replacement policy, least recently used (LRU), replaces the oldest entry, the entry that was accessed less recently than any other entry. More sophisticated caching algorithms also take into account the frequency of use of entries.
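A minimal sketch of an LRU cache in Python follows, relying on the insertion order maintained by <code>collections.OrderedDict</code>; the names used here (<code>LRUCache</code>, <code>get</code>, <code>put</code>) are illustrative rather than taken from any particular library.

<syntaxhighlight lang="python">
from collections import OrderedDict

class LRUCache:
    """Illustrative LRU cache: evicts the least recently used entry when full."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()  # ordered from least to most recently used

    def get(self, key):
        if key not in self.entries:
            return None                    # cache miss
        self.entries.move_to_end(key)      # mark as most recently used
        return self.entries[key]

    def put(self, key, value):
        if key in self.entries:
            self.entries.move_to_end(key)
        elif len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)  # evict the least recently used entry
        self.entries[key] = value
</syntaxhighlight>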
 
==={{Anchor|Dirty|WRITEPOLICIES|WRITE-BACK|WRITE-BEHIND|WRITE-THROUGH|WRITE-AROUND}}Write policies===
[[File:Write-through with no-write-allocation.svg|thumb|380px|A write-through cache without write allocation]]
[[File:Write-back with write-allocation.svg|thumb|500px|A write-back cache with write allocation]]
 
When a system writes data to the cache, that data must eventually be propagated to the backing store as well. The timing for this write is governed by what is known as the ''write policy''. The two primary write policies are:<ref>{{Cite web|url=https://www.linuxjournal.com/article/7105|title=Understanding Caching|last=Bottomley|first=James|date=2004-01-01|website=Linux Journal|access-date=2019-10-01}}</ref>
* ''Write-through'': Writes are performed synchronously to both the cache and the backing store.
* ''Write-back'': Initially, writing is done only to the cache. The write to the backing store is postponed until the modified content is about to be replaced by another cache block.
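As a rough illustration of the first policy, the sketch below shows a write-through cache backed by a dictionary standing in for slower storage; the names <code>WriteThroughCache</code> and <code>backing_store</code> are assumptions for the example, and write misses here bypass the cache, matching the no-write-allocate variant pictured above.

<syntaxhighlight lang="python">
class WriteThroughCache:
    """Illustrative write-through cache: every write reaches the backing store immediately."""

    def __init__(self, backing_store):
        self.backing_store = backing_store  # e.g. a dict standing in for slower storage
        self.cache = {}

    def read(self, key):
        if key in self.cache:               # cache hit
            return self.cache[key]
        value = self.backing_store[key]     # cache miss: fetch from the backing store
        self.cache[key] = value
        return value

    def write(self, key, value):
        if key in self.cache:               # write hit: keep the cached copy in sync
            self.cache[key] = value
        self.backing_store[key] = value     # the write always goes to the backing store
</syntaxhighlight>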
 
A write-back cache is more complex to implement since it needs to track which of its locations have been written over and mark them as ''dirty'' for later writing to the backing store. The data in these locations are written back to the backing store only when they are evicted from the cache, a process referred to as a ''lazy write''. For this reason, a read miss in a write-back cache may require two accesses to the backing store: one to write back the dirty data, and one to retrieve the requested data. Other policies may also trigger data write-back; for example, the client may make many changes to data in the cache and then explicitly notify the cache to write back the data.
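A corresponding sketch of a write-back cache can make the dirty tracking and lazy write concrete; the capacity limit, the deliberately simplistic victim choice, and the <code>flush</code> method are assumptions for the example rather than features of any specific design.

<syntaxhighlight lang="python">
class WriteBackCache:
    """Illustrative write-back cache: writes are buffered and flushed lazily on eviction."""

    def __init__(self, backing_store, capacity):
        self.backing_store = backing_store
        self.capacity = capacity
        self.cache = {}     # key -> value
        self.dirty = set()  # keys modified since they were last written back

    def _evict_one(self):
        victim = next(iter(self.cache))          # simplistic victim choice, not LRU
        if victim in self.dirty:                 # lazy write: flush dirty data only now
            self.backing_store[victim] = self.cache[victim]
            self.dirty.discard(victim)
        del self.cache[victim]

    def read(self, key):
        if key in self.cache:
            return self.cache[key]
        if len(self.cache) >= self.capacity:
            self._evict_one()                    # a dirty victim costs an extra store write
        value = self.backing_store[key]          # second access: fetch the requested data
        self.cache[key] = value
        return value

    def write(self, key, value):
        if key not in self.cache and len(self.cache) >= self.capacity:
            self._evict_one()
        self.cache[key] = value                  # write only to the cache for now
        self.dirty.add(key)

    def flush(self):
        """Explicit write-back requested by the client."""
        for key in list(self.dirty):
            self.backing_store[key] = self.cache[key]
        self.dirty.clear()
</syntaxhighlight>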
 
Since write operations do not return data to the requester, a decision needs to be made on write misses: whether or not to load the data into the cache. This is determined by these two ''write-miss policies'':
* ''Write allocate'' (also called ''fetch on write''): Data at the missed-write ___location is loaded to cache, followed by a write-hit operation. In this approach, write misses are similar to read misses.
* ''No-write allocate'' (also called ''write-no-allocate'' or ''write around''): Data at the missed-write ___location is not loaded to cache, and is written directly to the backing store. In this approach, data is loaded into the cache on read misses only.
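The write-miss decision in isolation might look like the following sketch; the <code>write_allocate</code> flag and the dictionary model of the cache are assumptions for the example, and a real cache would operate on whole blocks rather than individual keys.

<syntaxhighlight lang="python">
def handle_write(cache, backing_store, key, value, write_allocate):
    """Illustrative write handling for a write-through style cache."""
    if key in cache:
        cache[key] = value          # write hit: both write-miss policies behave the same
    elif write_allocate:
        cache[key] = value          # write allocate: the missed ___location enters the cache
                                    # (a real cache would fetch the containing block first)
    # no-write allocate: a write miss leaves the cache untouched
    backing_store[key] = value      # write-through: the backing store is always updated
</syntaxhighlight>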
 
While both write-through and write-back policies can implement either write-miss policy, they are typically paired as follows:<ref name="HennessyPatterson2011">{{cite book|last1=Hennessy|first1=John L.|url=https://books.google.com/books?id=v3-1hVwHnHwC&pg=SL2-PA12|title=Computer Architecture: A Quantitative Approach|last2=Patterson|first2=David A.|publisher=Elsevier|year=2011|isbn=978-0-12-383872-8|page=B–12|language=en}}</ref><ref>{{cite book|title=Computer Architecture A Quantitative Approach|last1=Patterson|first1=David A.|last2=Hennessy|first2=John L.|isbn=1-55860-069-8|date=1990|page=413|publisher=Morgan Kaufmann Publishers}}</ref>
* A write-back cache typically employs write allocate, anticipating that subsequent writes (or even reads) to the same ___location will benefit from having the data already in the cache.
* A write-through cache uses no-write allocate. Here, subsequent writes have no advantage, since they still need to be written directly to the backing store.
 
Entities other than the cache may change the data in the backing store, in which case the copy in the cache may become out-of-date or ''stale''. Alternatively, when the client updates the data in the cache, copies of that data in other caches will become stale. Communication protocols between the cache managers that keep the data consistent are associated with [[cache coherence]].
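As a toy illustration of why such protocols are needed, the following sketch shows two caches sharing one backing store, where a write in one cache invalidates the peer's copy; this is a deliberately simplified invalidation scheme, not a real hardware coherence protocol, and all names are illustrative.

<syntaxhighlight lang="python">
class CoherentCache:
    """Toy write-through cache that invalidates peer copies when it writes."""

    def __init__(self, backing_store):
        self.backing_store = backing_store
        self.cache = {}
        self.peers = []                        # other caches sharing the same backing store

    def read(self, key):
        if key not in self.cache:              # miss, or entry previously invalidated by a peer
            self.cache[key] = self.backing_store[key]
        return self.cache[key]

    def write(self, key, value):
        self.backing_store[key] = value        # write through to the shared store
        self.cache[key] = value
        for peer in self.peers:                # peers' copies are now stale...
            peer.cache.pop(key, None)          # ...so invalidate them

store = {"x": 1}
a, b = CoherentCache(store), CoherentCache(store)
a.peers, b.peers = [b], [a]
b.read("x")               # b caches the value 1
a.write("x", 2)           # a updates the store and invalidates b's stale copy
assert b.read("x") == 2   # b misses and re-fetches the fresh value
</syntaxhighlight>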
 
===Prefetch===