Cache placement policies

=== Advantage ===
* This placement policy is power-efficient, as it avoids searching through all the cache lines.
* The placement policy and the [[CPU cache#Replacement policies|replacement policy]] are simple.
* Simple and low-cost hardware can be used, since only one tag needs to be checked at a time; a sketch of such a lookup follows this list.
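The following sketch (in C, assuming a 64-byte block size and 256 cache lines purely for illustration) shows why only one tag comparison is needed: the index bits of the address select exactly one line, and only that line's tag is compared.

<syntaxhighlight lang="c">
#include <stdbool.h>
#include <stdint.h>

#define BLOCK_SIZE 64   /* bytes per cache line (illustrative) */
#define NUM_LINES  256  /* one line per set in a direct-mapped cache */

struct cache_line {
    bool     valid;
    uint32_t tag;
    /* data bytes omitted */
};

static struct cache_line cache[NUM_LINES];

/* Direct-mapped lookup: the index selects a single line,
 * so only one tag comparison is performed per access. */
bool lookup(uint32_t addr)
{
    uint32_t block = addr / BLOCK_SIZE;   /* strip the byte-offset bits   */
    uint32_t index = block % NUM_LINES;   /* selects exactly one line     */
    uint32_t tag   = block / NUM_LINES;   /* remaining upper address bits */

    return cache[index].valid && cache[index].tag == tag;
}
</syntaxhighlight>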
 
=== Disadvantage ===
* It has a lower cache hit rate, since there is only one cache line available in each set. Every time a new memory block mapping to the same set is referenced, the existing cache line is replaced, which causes a conflict miss.<ref>{{Cite web|url=http://meseec.ce.rit.edu/eecc551-winter2001/551-1-30-2002.pdf|title=Cache Miss Types|access-date=2016-10-24|archive-date=2016-11-30|archive-url=https://web.archive.org/web/20161130184519/http://meseec.ce.rit.edu/eecc551-winter2001/551-1-30-2002.pdf|url-status=dead}}</ref> A worked illustration of such a conflict is sketched below.
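As a worked illustration of a conflict miss, using the same assumed parameters as the sketch above (64-byte blocks, 256 lines), two addresses that are 16&nbsp;KB apart map to the same index and therefore keep evicting each other, even while the rest of the cache sits empty:

<syntaxhighlight lang="c">
#include <stdint.h>
#include <stdio.h>

#define BLOCK_SIZE 64
#define NUM_LINES  256

int main(void)
{
    /* Two addresses 16 KB apart: same index, different tags. */
    uint32_t addrs[2] = { 0x00000, 0x04000 };

    for (int i = 0; i < 2; i++) {
        uint32_t block = addrs[i] / BLOCK_SIZE;
        printf("addr 0x%05x -> index %u, tag %u\n",
               (unsigned)addrs[i],
               (unsigned)(block % NUM_LINES),
               (unsigned)(block / NUM_LINES));
    }
    /* Both addresses land on index 0 (with tags 0 and 1), so accessing
     * them alternately replaces the same line on every access and each
     * access is a conflict miss, even though 255 lines stay unused. */
    return 0;
}
</syntaxhighlight>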
 
=== Example ===
 
=== To place a block in the cache ===
* A cache line is selected based on its [[CPU cache#Flag bits|valid bit]].<ref name=":0" /> If the valid bit is 0, the new memory block can be placed in that line; otherwise, it is placed in another cache line whose valid bit is 0.
* If the cache is completely occupied, a block is evicted and the new memory block is placed in the freed cache line.
* Which memory block to evict from the cache is decided by the [[CPU cache#Replacement policies|replacement policy]].<ref>{{Cite web|url=http://www.cs.umd.edu/class/sum2003/cmsc311/Notes/Memory/fully.html|title=Fully Associative Cache|archive-url=https://web.archive.org/web/20171224054857/http://www.cs.umd.edu/class/sum2003/cmsc311/Notes/Memory/fully.html|archive-date=December 24, 2017|url-status=dead}}</ref> A sketch of this placement step follows the list.
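A minimal sketch of this placement step (in C, with an assumed 8-line cache; random replacement stands in here for whatever replacement policy the cache actually implements):

<syntaxhighlight lang="c">
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

#define NUM_LINES 8   /* illustrative cache size */

struct cache_line {
    bool     valid;
    uint32_t tag;
    /* data bytes omitted */
};

static struct cache_line cache[NUM_LINES];

/* Place a memory block (identified by its tag) into a fully
 * associative cache: any line with valid bit 0 may be used; if every
 * line is valid, the replacement policy chooses a victim to evict. */
void place_block(uint32_t tag)
{
    for (int i = 0; i < NUM_LINES; i++) {
        if (!cache[i].valid) {        /* free line found */
            cache[i].valid = true;
            cache[i].tag   = tag;
            return;
        }
    }
    /* Cache full: evict a victim. Random replacement is used only as
     * a placeholder for the actual replacement policy (e.g. LRU). */
    int victim = rand() % NUM_LINES;
    cache[victim].tag = tag;          /* valid bit remains 1 */
}
</syntaxhighlight>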
 
=== To search a word in the cache ===
 
== Two-way skewed associative cache ==
Other schemes have been suggested, such as the ''skewed cache'',<ref name="Seznec">{{cite journal|author=André Seznec|author-link=André Seznec|year=1993|title=A Case for Two-Way Skewed-Associative Caches|journal=ACM SIGARCH Computer Architecture News|volume=21|issue=2|pages=169–178|doi=10.1145/173682.165152|doi-access=free}}</ref> where the index for way 0 is direct, as above, but the index for way 1 is formed with a [[hash function]]. A good hash function has the property that addresses which conflict with the direct mapping tend not to conflict when mapped with the hash function, and so it is less likely that a program will suffer from an unexpectedly large number of conflict misses due to a pathological access pattern. The downside is extra latency from computing the hash function.<ref name="CK">{{cite web|url=http://www.stanford.edu/class/ee282/08_handouts/L03-Cache.pdf|title=Lecture 3: Advanced Caching Techniques|author=C. Kozyrakis|author-link=Christos Kozyrakis|archive-url=https://web.archive.org/web/20120907012034/http://www.stanford.edu/class/ee282/08_handouts/L03-Cache.pdf|archive-date=September 7, 2012|url-status=dead}}</ref> Additionally, when it comes time to load a new line and evict an old line, it may be difficult to determine which existing line was least recently used, because the new line conflicts with data at different indexes in each way; [[Cache algorithms|LRU]] tracking for non-skewed caches is usually done on a per-set basis. Nevertheless, skewed-associative caches have major advantages over conventional set-associative ones.<ref>
[http://www.irisa.fr/caps/PROJECTS/Architecture/ Micro-Architecture] "Skewed-associative caches have ... major advantages over conventional set-associative caches."
</ref>
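A minimal sketch of the index computation for a two-way skewed-associative cache (in C, with illustrative parameters): way 0 is indexed directly by the low block-number bits, while way 1 is indexed through a hash. The XOR-based hash below is only a stand-in, not the skewing function proposed by Seznec, and the full block number is stored in each line for simplicity.

<syntaxhighlight lang="c">
#include <stdbool.h>
#include <stdint.h>

#define BLOCK_SIZE   64    /* illustrative parameters */
#define SETS_PER_WAY 256

struct cache_line {
    bool     valid;
    uint32_t block;   /* full block number kept for simplicity; a real
                         cache stores only the tag bits it needs */
};

static struct cache_line way0[SETS_PER_WAY];
static struct cache_line way1[SETS_PER_WAY];

/* Illustrative hash: mix the upper block-number bits into the index so
 * that blocks conflicting in way 0 tend not to conflict in way 1. */
static uint32_t skew_hash(uint32_t block)
{
    return (block ^ (block >> 8)) % SETS_PER_WAY;
}

bool lookup(uint32_t addr)
{
    uint32_t block = addr / BLOCK_SIZE;
    uint32_t idx0  = block % SETS_PER_WAY;   /* direct index for way 0 */
    uint32_t idx1  = skew_hash(block);       /* hashed index for way 1 */

    return (way0[idx0].valid && way0[idx0].block == block)
        || (way1[idx1].valid && way1[idx1].block == block);
}
</syntaxhighlight>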