Cache placement policies

{{Short description|Design decisions affecting processor cache speeds and sizes}}
{{distinguish|Cache replacement policies}}
A [[CPU cache]] is a memory that holds data recently used by the processor. A block of memory cannot necessarily be placed at an arbitrary location in the cache; it may be restricted to a single [[CPU cache#Cache entries|cache line]] or a set of cache lines<ref name=":0">{{Cite web|url=https://cseweb.ucsd.edu/classes/su07/cse141/cache-handout.pdf|title=The Basics of Cache}}</ref> by the '''cache placement policy'''.<ref>{{Cite web|url=http://web.cs.iastate.edu/~prabhu/Tutorial/CACHE/bl_place.html|title=Cache Placement Policies}}</ref><ref>{{Cite web|url=http://fourier.eng.hmc.edu/e85_old/lectures/memory/node4.html|title=Placement Policies}}</ref> In other words, the cache placement policy determines where a particular memory block can be placed when it is brought into the cache.
 
There are three different policies available for placement of a memory block in the cache: direct-mapped, fully associative, and set-associative. Originally this space of cache organizations was described using the term "congruence mapping".<ref>{{Cite journal|last=Mattson|first=R.L.|author1-link=Richard Mattson|last2=Gecsei|first2=J.|last3=Slutz|first3=D. R.|last4=Traiger|first4=I|date=1970|title= Evaluation Techniques for Storage Hierarchies|journal=IBM Systems Journal|volume=9|issue=2|pages=78–117|doi=10.1147/sj.92.0078}}</ref>
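Each of these organizations splits a memory address into tag, index, and block-offset fields in a different way. The following C sketch is only an illustration of that split, not part of the cited sources; the 32&nbsp;KiB capacity, 64-byte block size, and associativities are assumptions chosen for the example.

<syntaxhighlight lang="c">
/* Illustrative sketch: how an address splits into tag/index/offset
 * for direct-mapped, set-associative, and fully associative caches.
 * Cache and block sizes are assumptions, not prescribed by the text. */
#include <stdint.h>
#include <stdio.h>

#define CACHE_SIZE (32 * 1024)  /* total cache capacity in bytes */
#define BLOCK_SIZE 64           /* bytes per cache line          */

static void split_address(uint32_t addr, unsigned ways)
{
    unsigned num_lines = CACHE_SIZE / BLOCK_SIZE;         /* 512 lines          */
    unsigned num_sets  = num_lines / ways;                /* sets = lines / ways */
    uint32_t offset    = addr % BLOCK_SIZE;               /* byte within block   */
    uint32_t index     = (addr / BLOCK_SIZE) % num_sets;  /* selects the set     */
    uint32_t tag       = addr / (BLOCK_SIZE * num_sets);  /* identifies the block */
    printf("%u-way: tag=%x index=%u offset=%u\n",
           ways, (unsigned)tag, (unsigned)index, (unsigned)offset);
}

int main(void)
{
    uint32_t addr = 0x1234ABCD;
    split_address(addr, 1);                        /* direct-mapped: one line per set        */
    split_address(addr, 4);                        /* 4-way set-associative                  */
    split_address(addr, CACHE_SIZE / BLOCK_SIZE);  /* fully associative: one set, no index   */
    return 0;
}
</syntaxhighlight>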
=== To place a block in the cache ===
* The set is determined by the index bits derived from the address of the memory block.
* The memory block is placed in an available cache line of the identified set, and the tag is stored in the tag field associated with that line. If all the cache lines in the set are occupied, the new block replaces one of them, chosen by the [[CPU cache#Replacement policies|replacement policy]], as illustrated in the sketch below.
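The following C sketch expresses these placement steps compactly. It is not taken from the cited sources: the set count, the associativity, and the round-robin victim counter (standing in for a real replacement policy such as LRU or random) are assumptions made for the example.

<syntaxhighlight lang="c">
/* Illustrative sketch of placing a block in a set-associative cache.
 * The index bits select the set; the block goes into any free line,
 * otherwise a victim is chosen by the (here simplified) replacement policy. */
#include <stdbool.h>
#include <stdint.h>

#define NUM_SETS 256
#define WAYS     4

struct line { bool valid; uint32_t tag; };

static struct line cache[NUM_SETS][WAYS];
static unsigned    next_victim[NUM_SETS];  /* stand-in replacement state */

static void place_block(uint32_t block_addr)
{
    uint32_t index = block_addr % NUM_SETS;  /* index bits select the set   */
    uint32_t tag   = block_addr / NUM_SETS;  /* remaining bits form the tag */

    for (unsigned way = 0; way < WAYS; way++) {
        if (!cache[index][way].valid) {       /* free line in the set?        */
            cache[index][way].valid = true;
            cache[index][way].tag   = tag;    /* store the tag with the line  */
            return;
        }
    }
    /* Set full: the replacement policy picks the victim line to overwrite. */
    unsigned victim = next_victim[index];
    next_victim[index] = (victim + 1) % WAYS;
    cache[index][victim].tag = tag;
}
</syntaxhighlight>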
 
=== To locate a word in the cache ===
 
==Two-way skewed associative cache==
Other schemes have been suggested, such as the ''skewed cache'',<ref name="Seznec">{{cite journal|author=André Seznec|year=1993|title=A Case for Two-Way Skewed-Associative Caches|journal=ACM SIGARCH Computer Architecture News|volume=21|issue=2|pages=169–178|doi=10.1145/173682.165152}}</ref> where the index for way 0 is direct, as above, but the index for way 1 is formed with a [[hash function]]. A good hash function has the property that addresses which conflict with the direct mapping tend not to conflict when mapped with the hash function, and so it is less likely that a program will suffer from an unexpectedly large number of conflict misses due to a pathological access pattern. The downside is extra latency from computing the hash function.<ref name="CK">{{cite web|url=http://www.stanford.edu/class/ee282/08_handouts/L03-Cache.pdf|title=Lecture 3: Advanced Caching Techniques|author=C. Kozyrakis|archive-url=https://web.archive.org/web/20120907012034/http://www.stanford.edu/class/ee282/08_handouts/L03-Cache.pdf|archive-date=September 7, 2012}}</ref> Additionally, when it comes time to load a new line and evict an old line, it may be difficult to determine which existing line was least recently used, because the new line conflicts with data at different indexes in each way; [[Cache algorithms|LRU]] tracking for non-skewed caches is usually done on a per-set basis. Nevertheless, skewed-associative caches have major advantages over conventional set-associative ones.<ref>
[http://www.irisa.fr/caps/PROJECTS/Architecture/ Micro-Architecture] "Skewed-associative caches have ... major advantages over conventional set-associative caches."
</ref>
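As a rough illustration of the skewed indexing idea, the C sketch below indexes way 0 directly with the low-order block-address bits and way 1 with a simple XOR mix of low and higher bits. The XOR hash and the 256-set size are assumptions made for this example; they are not the skewing functions defined in Seznec's proposal.

<syntaxhighlight lang="c">
/* Illustrative sketch of 2-way skewed-associative indexing: way 0 is
 * indexed directly, way 1 by a simple XOR-based hash, so blocks that
 * collide in way 0 tend to map to different lines in way 1. */
#include <stdint.h>

#define NUM_SETS 256  /* lines per way (assumed for the example) */

/* Direct index for way 0: low-order block-address bits. */
static uint32_t index_way0(uint32_t block_addr)
{
    return block_addr % NUM_SETS;
}

/* Hashed index for way 1: mix in higher address bits. */
static uint32_t index_way1(uint32_t block_addr)
{
    uint32_t low  = block_addr % NUM_SETS;
    uint32_t high = (block_addr / NUM_SETS) % NUM_SETS;
    return low ^ high;
}
</syntaxhighlight>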
== References ==
{{Reflist}}
 
 
 
[[Category:Cache (computing)]]