{{Short description|Design decisions affecting processor cache speeds and sizes}}
{{Distinguish|cache replacement policies}}
There are three different policies available for placement of a memory block in the cache: direct-mapped, fully associative, and set-associative. Originally this space of cache organizations was described using the term "congruence mapping".<ref>{{Cite journal|last=Mattson|first=R.L.|author1-link=Richard Mattson|last2=Gecsei|first2=J.|last3=Slutz|first3=D. R.|last4=Traiger|first4=I|date=1970|title= Evaluation Techniques for Storage Hierarchies|journal=IBM Systems Journal|volume=9|issue=2|pages=78–117|doi=10.1147/sj.92.0078}}</ref>
== Direct-mapped cache ==
In a direct-mapped cache structure, the cache is organized into multiple sets<ref name=":0" /> with a single cache line per set. Based on the address of the memory block, it can only occupy a single cache line. The cache can be framed as an {{nowrap|''n'' × 1}} column matrix.
=== To place a block in the cache ===
=== Advantages ===
* This placement policy is power-efficient as it avoids the search through all the cache lines.
* The placement policy and the [[CPU cache#Replacement policies|replacement policy]] are simple.
=== Disadvantage ===
* It has a lower cache hit rate, as there is only one cache line available in a set. Every time a new memory block is referenced to the same set, the cache line is replaced, which causes a conflict miss.<ref>{{Cite web|url=http://meseec.ce.rit.edu/eecc551-winter2001/551-1-30-2002.pdf|title=Cache Miss Types|access-date=2016-10-24|archive-date=2016-11-30|archive-url=https://web.archive.org/web/20161130184519/http://meseec.ce.rit.edu/eecc551-winter2001/551-1-30-2002.pdf|url-status=dead}}</ref>
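The conflict miss described above can be illustrated with a short sketch (a hypothetical cache of 8 lines with 4-byte blocks; the parameters and names are illustrative, not from any particular processor):

```python
# Illustrative sketch of direct-mapped placement, not cycle-accurate hardware.
NUM_LINES = 8    # assumed cache size: 8 lines
BLOCK_SIZE = 4   # assumed block size: 4 bytes

def direct_mapped_line(address):
    """Each block maps to exactly one line: (block number) mod (number of lines)."""
    block = address // BLOCK_SIZE
    return block % NUM_LINES

# Two addresses whose block numbers differ by NUM_LINES collide on the same
# line, causing a conflict miss even when the rest of the cache is empty.
a = 0x00                            # block 0 -> line 0
b = BLOCK_SIZE * NUM_LINES          # block 8 -> line 0 as well
print(direct_mapped_line(a), direct_mapped_line(b))
```

Because the mapping is a fixed modulo, the second access evicts the first even though seven other lines are free.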
=== Example ===
== Fully associative cache ==
In a fully associative cache, the cache is organized into a single cache set with multiple cache lines. A memory block can occupy any of the cache lines. The cache organization can be framed as a {{nowrap|1 × ''m''}} row matrix.
=== To place a block in the cache ===
* The cache line is selected based on the [[CPU cache#Flag bits|valid bit]]<ref name=":0" /> associated with it. If the valid bit is 0, the new memory block can be placed in the cache line, else it has to be placed in another cache line with valid bit 0.
* If the cache is completely occupied then a block is evicted and the memory block is placed in that cache line.
* The eviction of a memory block from the cache is decided by the [[CPU cache#Replacement policies|replacement policy]].
=== To search a word in the cache ===
* The Tag field of the memory address is compared with the tag bits associated with all the cache lines. If it matches, the block is present in the cache and it is a cache hit. If it doesn't match, then it is a cache miss and the block has to be fetched from the lower memory.
* Based on the Offset, a byte is selected and returned to the processor.
[[File:Fully-Associative Cache Snehal Img.png|thumb|513x513px|Fully associative cache]]
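The search procedure can be sketched as follows (a minimal illustration with a hypothetical 4-byte block size; the class and method names are invented for this example and do not model replacement or timing):

```python
BLOCK_SIZE = 4  # assumed block size in bytes

class FullyAssociativeCache:
    """Toy model: one set holding every line; each line is (valid, tag, data)."""

    def __init__(self, num_lines):
        self.lines = [{"valid": False, "tag": None, "data": None}
                      for _ in range(num_lines)]

    def lookup(self, address):
        """Compare the tag against every line: a hit returns the data, a miss returns None."""
        tag = address // BLOCK_SIZE  # no index bits: the whole block number is the tag
        for line in self.lines:
            if line["valid"] and line["tag"] == tag:
                return line["data"]  # cache hit
        return None                  # cache miss

    def fill(self, address, data):
        """Place a block in the first line whose valid bit is 0 (eviction omitted)."""
        tag = address // BLOCK_SIZE
        for line in self.lines:
            if not line["valid"]:
                line.update(valid=True, tag=tag, data=data)
                return

cache = FullyAssociativeCache(4)
cache.fill(0x10, "block data")
print(cache.lookup(0x10))
```

The loop over every line is exactly the exhaustive comparison that makes this organization power-hungry in hardware, where it is done in parallel by associative-comparison logic.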
=== Disadvantages ===
* The placement policy is power-hungry as it has to iterate over the entire cache set to locate a block.
* The most expensive of all methods, due to the high cost of associative-comparison hardware.
=== Example ===
Consider a main memory of 16 kilobytes, which is organized as 4-byte blocks, and a fully associative cache of 256 bytes and a block size of 4 bytes. Because the main memory is 16kB, we need a minimum of 14 bits to uniquely represent a memory address.
The incoming address to the cache is divided into bits for offset and tag.
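Under these parameters the split works out to a 2-bit offset and a 12-bit tag, which can be checked with a short sketch (the helper name and example address are illustrative):

```python
OFFSET_BITS = 2    # 4-byte blocks -> 2 offset bits
ADDRESS_BITS = 14  # 16 kB main memory -> 14-bit addresses

def split_fully_associative(address):
    """Split a 14-bit address into (tag, offset); fully associative means no index bits."""
    offset = address & ((1 << OFFSET_BITS) - 1)
    tag = address >> OFFSET_BITS  # the remaining 12 bits
    return tag, offset

tag, offset = split_fully_associative(0x0ABC)
print(f"tag={tag:012b} offset={offset:02b}")
```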
Set-associative cache is a trade-off between direct-mapped cache and fully associative cache.
A set-associative cache can be imagined as an {{nowrap|''n'' × ''m''}} matrix. The cache is divided into ''n'' sets, and each set contains ''m'' cache lines. A memory block is first mapped onto a set and then placed into any cache line of that set.
The range of caches from direct-mapped to fully associative is a continuum of levels of set associativity. (A direct-mapped cache is one-way set-associative and a fully associative cache with ''m'' cache lines is ''m''-way set-associative.)
=== To locate a word in the cache ===
* The set is determined by the index bits derived from the address of the memory block.
* The tag bits are compared with the tags of all cache lines present in the selected set. If the tag matches any of the cache lines, it is a cache hit and the appropriate line is returned. If the tag doesn't match any of the lines, it is a cache miss and the block has to be fetched from the lower memory.
=== Advantages ===
Consider a main memory of 16 kilobytes, which is organized as 4-byte blocks, and a 2-way set-associative cache of 256 bytes with a block size of 4 bytes. Because the main memory is 16kB, we need a minimum of 14 bits to uniquely represent a memory address.
Since each cache block is of size 4 bytes and is 2-way set-associative, the total number of sets in the cache is 256/(4 * 2), which equals 32 sets.
[[File:Set-Associative Cache Snehal Img.png|thumb|578x578px|Set-associative cache]]
The incoming address to the cache is divided into bits for Offset, Index and Tag.
* ''Offset'' corresponds to the bits used to determine the byte to be accessed from the cache line. Because the cache lines are 4 bytes long, there are ''2 offset bits''.
# Address <code>0x0004</code> (tag - <code>0b000_0000</code>, index – <code>0b0_0001</code>, offset – <code>0b00</code>) corresponds to block 1 of the memory and maps to the set 1 of the cache. The block occupies a cache line in set 1, determined by the replacement policy for the cache.
# Address <code>0x00FF</code> (tag – <code>0b000_0001</code>, index – <code>0b1_1111</code>, offset – <code>0b11</code>) corresponds to block 63 of the memory and maps to the set 31 of the cache. The block occupies a cache line in set 31, determined by the replacement policy for the cache.
# Address <code>0x0100</code> (tag – <code>0b000_0010</code>, index – <code>0b0_0000</code>, offset – <code>0b00</code>) corresponds to block 64 of the memory and maps to the set 0 of the cache. The block occupies a cache line in set 0, determined by the replacement policy for the cache.
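The three mappings above can be verified with a short sketch (illustrative helper name; the bit widths follow the example: 2 offset bits, 5 index bits for 32 sets, 7 tag bits):

```python
OFFSET_BITS = 2  # 4-byte blocks -> 2 offset bits
INDEX_BITS = 5   # 32 sets -> 5 index bits

def split_set_associative(address):
    """Split a 14-bit address into (tag, index, offset) for the 32-set cache above."""
    offset = address & ((1 << OFFSET_BITS) - 1)
    index = (address >> OFFSET_BITS) & ((1 << INDEX_BITS) - 1)
    tag = address >> (OFFSET_BITS + INDEX_BITS)
    return tag, index, offset

# Reproduces the worked examples: 0x0004 -> set 1, 0x00FF -> set 31, 0x0100 -> set 0.
for addr in (0x0004, 0x00FF, 0x0100):
    tag, index, offset = split_set_associative(addr)
    print(f"{addr:#06x}: tag={tag:07b} index={index:05b} offset={offset:02b}")
```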
== Two-way skewed associative cache ==
Other schemes have been suggested, such as the ''skewed cache'',<ref name="Seznec">{{cite journal|author=André Seznec|author-link=André Seznec|year=1993|title=A Case for Two-Way Skewed-Associative Caches|journal=ACM SIGARCH Computer Architecture News|volume=21|issue=2|pages=169–178|doi=10.1145/173682.165152|doi-access=free}}</ref> where the index for way 0 is direct, as above, but the index for way 1 is formed with a [[hash function]]. A good hash function has the property that addresses which conflict with the direct mapping tend not to conflict when mapped with the hash function, and so it is less likely that a program will suffer from an unexpectedly large number of conflict misses due to a pathological access pattern. The downside is extra latency from computing the hash function.<ref name="CK">{{cite web|url=http://www.stanford.edu/class/ee282/08_handouts/L03-Cache.pdf|title=Lecture 3: Advanced Caching Techniques|author=C. Kozyrakis|author-link=Christos Kozyrakis|archive-url=https://web.archive.org/web/20120907012034/http://www.stanford.edu/class/ee282/08_handouts/L03-Cache.pdf|archive-date=September 7, 2012|url-status=dead}}</ref> Additionally, when it comes time to load a new line and evict an old line, it may be difficult to determine which existing line was least recently used, because the new line conflicts with data at different indexes in each way; [[Cache algorithms|LRU]] tracking for non-skewed caches is usually done on a per-set basis. Nevertheless, skewed-associative caches have major advantages over conventional set-associative ones.<ref>
[http://www.irisa.fr/caps/PROJECTS/Architecture/ Micro-Architecture] "Skewed-associative caches have ... major advantages over conventional set-associative caches."
</ref>
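The two indexing functions can be sketched as follows (a toy illustration with an assumed 32 sets per way; the XOR-fold hash is arbitrary, chosen only to show the idea, and is not Seznec's actual skewing function):

```python
NUM_SETS = 32    # assumed number of sets per way
OFFSET_BITS = 2  # assumed 4-byte blocks

def way0_index(address):
    """Way 0 is indexed directly, as in a conventional set-associative cache."""
    return (address >> OFFSET_BITS) % NUM_SETS

def way1_index(address):
    """Way 1 uses a different mapping: a toy hash that XOR-folds
    higher block-number bits into the index."""
    block = address >> OFFSET_BITS
    return (block ^ (block >> 5)) % NUM_SETS

# Blocks 0 and 32 conflict in way 0 (same direct index) but land in
# different sets of way 1, so the pair no longer causes a conflict miss.
a = 0 << OFFSET_BITS
b = 32 << OFFSET_BITS
print(way0_index(a) == way0_index(b))  # conflict in way 0
print(way1_index(a) == way1_index(b))  # no conflict in way 1
```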