Cache hierarchy

'''Cache hierarchy''', or '''multi-level caches''', refers to a memory architecture that uses a hierarchy of memory stores based on varying access speeds to cache data. Highly requested data is cached in high-speed access memory stores, allowing swifter access by [[central processing unit]] (CPU) cores.
 
Cache hierarchy is a form and part of [[memory hierarchy]], and can be considered a form of [[tiered storage]].<ref name="CA:QA">{{cite book |last1=Hennessy |first1=John L |last2=Patterson |first2=David A |last3=Asanović |first3=Krste |last4=Bakos |first4=Jason D |last5=Colwell |first5=Robert P |last6=Bhattacharjee |first6=Abhishek |last7=Conte |first7=Thomas M |last8=Duato |first8=José |last9=Franklin |first9=Diana |last10=Goldberg |first10=David |last11=Jouppi |first11=Norman P |last12=Li |first12=Sheng |last13=Muralimanohar |first13=Naveen |last14=Peterson |first14=Gregory D |last15=Pinkston |first15=Timothy Mark |last16=Ranganathan |first16=Prakash |last17=Wood |first17=David Allen |last18=Young |first18=Clifford |last19=Zaky |first19=Amr |title=Computer Architecture: a Quantitative Approach |date=2011 |isbn=978-0128119051 |edition= Sixth |language=English |oclc=983459758 }}</ref> This design was intended to allow CPU cores to process faster despite the [[CAS latency|memory latency]] of [[computer data storage|main memory]] access. Accessing main memory can act as a bottleneck for [[computer performance|CPU core performance]] as the CPU waits for data, while making all of main memory high-speed would be prohibitively expensive. High-speed caches are a compromise that allows high-speed access to the data most used by the CPU, permitting a faster [[clock rate|CPU clock]].<ref>{{Cite web |url=http://gec.di.uminho.pt/discip/minf/ac0102/0945CacheLevel.pdf |title=Cache: Why Level It}}</ref>
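The latency trade-off described above is commonly summarized by the average memory access time (AMAT) formula, AMAT = hit time + miss rate × miss penalty, applied recursively per cache level. The sketch below uses hypothetical latencies and miss rates (not taken from this article) to show how even a small, fast cache hides most of main memory's latency:

```python
# Illustrative sketch with hypothetical numbers: average memory access time
# (AMAT) for a two-level cache hierarchy in front of main memory.
def amat(hit_time, miss_rate, miss_penalty):
    """AMAT = hit time + miss rate * miss penalty (all in cycles)."""
    return hit_time + miss_rate * miss_penalty

# Assumed figures: L1 hit = 1 cycle, L2 hit = 10 cycles, main memory = 100
# cycles; L1 misses 5% of the time, L2 misses 10% of the time.
l2_amat = amat(hit_time=10, miss_rate=0.10, miss_penalty=100)  # 20.0 cycles
l1_amat = amat(hit_time=1, miss_rate=0.05, miss_penalty=l2_amat)  # 2.0 cycles
print(l1_amat)  # 2.0 -- far closer to L1 speed than to the 100-cycle memory
```

Under these assumed numbers, the effective access time is about 2 cycles rather than 100, which is why a small amount of fast cache can substitute for making all of main memory fast.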
 
[[File:Cache Organization.png|thumb|right|429x429px|Generic multi-level cache organization|alt=Process architecture diagram showing four independent processors each linked through cache systems to main memory and input-output system.]]
A shared cache is a cache which can be accessed by multiple cores.<ref>Akanksha Jain; Calvin Lin; 2019. Cache Replacement Policies. Morgan & Claypool Publishers. p. 45. {{ISBN|978-1-68173-577-1}}.</ref> Since it is shared, each block in the cache is unique, so the cache tends to have a higher hit rate, as there are no duplicate blocks. However, data-access latency can increase as multiple cores try to access the same cache.<ref>David Culler; Jaswinder Pal Singh; Anoop Gupta; 1999. Parallel Computer Architecture: A Hardware/Software Approach. Gulf Professional Publishing. p. 436. {{ISBN|978-1-55860-343-1}}.</ref>
 
In [[multi-core processor]]s, the design choice to make a cache shared or private impacts the performance of the processor.<ref name="Keckler (2009)">Stephen W. Keckler; Kunle Olukotun; H. Peter Hofstee; 2009. Multicore Processors and Systems. Springer Science & Business Media. p. 182. {{ISBN|978-1-4419-0263-4}}.</ref> In practice, the upper-level L1 cache (or sometimes L2)<ref name=":2" /><ref name=":3" /> is implemented as private, while lower-level caches are implemented as shared. This design provides high access rates for the high-level caches and low miss rates for the lower-level caches.<ref name="Keckler (2009)" />
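The duplicate-block effect behind the shared cache's higher hit rate can be shown with a deliberately simplified model (hypothetical, not from the cited sources): two cores working on the same four blocks keep two copies each in private caches, while a shared cache keeps one copy per block, leaving capacity for other data:

```python
# Simplified illustrative model: duplicate blocks in private vs. shared caches.
# Both cores reference the same working set of four blocks.
core0_blocks = {"A", "B", "C", "D"}
core1_blocks = {"A", "B", "C", "D"}

# Private caches: each core stores its own copy, so 8 cache entries are
# consumed to hold only 4 distinct blocks.
private_entries = len(core0_blocks) + len(core1_blocks)  # 8
distinct_blocks = len(core0_blocks | core1_blocks)       # 4

# Shared cache: one copy per block, so the same 8 entries could hold up to
# 8 distinct blocks -- effectively larger capacity and a higher hit rate,
# traded against contention when both cores access the cache at once.
print(private_entries, distinct_blocks)  # 8 4
```

The model ignores replacement policy and associativity; it only illustrates why eliminating duplicates raises the effective capacity of a shared lower-level cache.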
 
== Recent implementation models ==