A private cache is assigned to one particular core in a processor and cannot be accessed by any other core. In some architectures, each core has its own private cache; this creates the risk of duplicate blocks across the system's caches, which reduces capacity utilization. However, this design choice in a multi-layer cache architecture can also yield lower data-access latency.<ref name=":0" /><ref>{{Cite web|url=https://software.intel.com/en-us/articles/software-techniques-for-shared-cache-multi-core-systems|title=Software Techniques for Shared-Cache Multi-Core Systems}}</ref><ref>{{Cite web|url=http://hpcaconf.org/hpca13/papers/002-dybdahl.pdf|title=An Adaptive Shared/Private NUCA Cache Partitioning Scheme for Chip Multiprocessors}}</ref>
A shared cache is a cache which can be accessed by multiple cores.<ref>Akanksha Jain; Calvin Lin; 2019. Cache Replacement Policies. Morgan & Claypool Publishers. p. 45. ISBN 978-1-68173-577-1.</ref> Since the cache is shared, each block is stored only once; with no duplicate blocks, the cache achieves a higher hit rate. However, data-access latency can increase as multiple cores contend for the same cache.<ref>David Culler; Jaswinder Pal Singh; Anoop Gupta; 1999. Parallel Computer Architecture: A Hardware/Software Approach. Gulf Professional Publishing. p. 436. ISBN 978-1-55860-343-1.</ref>
In [[multi-core processor]]s, the design choice to make a cache shared or private impacts the performance of the processor.<ref name="Keckler (2009)">Stephen W. Keckler; Kunle Olukotun; H. Peter Hofstee; 2009. Multicore Processors and Systems. Springer Science & Business Media. p. 182. ISBN 978-1-4419-0263-4.</ref> In practice, the upper-level cache L1 (or sometimes L2)<ref name=":2" /><ref name=":3" /> is implemented as private and the lower-level caches are implemented as shared. This design provides fast access for the upper-level caches and low miss rates for the lower-level caches.<ref name="Keckler (2009)" />
== Recent implementation models ==