Cache (computing)

{{Main|CPU cache}}
Small memories on or close to the CPU can operate faster than the much larger [[main memory]].<ref>{{Cite journal|last1=Su|first1=Chao|last2=Zeng|first2=Qingkai|date=2021-06-10|editor-last=Nicopolitidis|editor-first=Petros|title=Survey of CPU Cache-Based Side-Channel Attacks: Systematic Analysis, Security Models, and Countermeasures|journal=Security and Communication Networks|language=en|volume=2021|pages=1–15|doi=10.1155/2021/5559552|issn=1939-0122|doi-access=free}}</ref> Most CPUs since the 1980s have used one or more caches, sometimes [[CPU cache#Multi-level caches|in cascaded levels]]; modern high-end [[Embedded computing|embedded]], [[Desktop computer|desktop]] and server [[microprocessor]]s may have as many as six types of cache, counting both levels and functions.<ref>{{cite web|title=Intel Broadwell Core i7 5775C '128MB L4 Cache' Gaming Behemoth and Skylake Core i7 6700K Flagship Processors Finally Available In Retail|date=25 September 2015|url=https://wccftech.com/intel-broadwell-core-i7-5775c-128mb-l4-cache-and-skylake-core-i7-6700k-flagship-processors-available-retail/}} Mentions L4 cache. Combined with separate I-cache and TLB, this brings the total number of caches (levels plus functions) to six.</ref> Some examples of caches with a specific function are the [[D-cache]], [[I-cache]] and the [[translation lookaside buffer]] for the [[memory management unit]] (MMU).
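The benefit of adding cache levels can be estimated with the average memory access time (AMAT) model; the hit times, miss rates and miss penalty used below are illustrative assumptions rather than figures for any particular processor:

:<math>\text{AMAT} = t_\text{hit} + \text{miss rate} \times t_\text{miss penalty}</math>

With an assumed 1 ns level-1 hit time, a 5% level-1 miss rate and a 100 ns penalty to main memory, the AMAT is 1 + 0.05 × 100 = 6 ns. Adding a second level with a 10 ns hit time that satisfies 90% of the level-1 misses reduces the effective miss penalty to 10 + 0.1 × 100 = 20 ns, giving an AMAT of 1 + 0.05 × 20 = 2 ns.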
 
In computer architecture, a CPU cache is a small, fast memory placed closer to the processor than main memory (RAM). It holds copies of the data and instructions most likely to be needed again soon, so that they can be supplied much faster than a main-memory access. Caches exploit the [[locality of reference|principle of locality]]: temporal locality (recently used data tends to be used again) and spatial locality (data stored near recently accessed data tends to be accessed next).<ref>{{cite book|last1=Hennessy|first1=J. L.|last2=Patterson|first2=D. A.|year=2017|title=Computer Architecture: A Quantitative Approach|edition=6th|publisher=Morgan Kaufmann Publishers}}</ref><ref>{{cite book|last1=Patterson|first1=D. A.|last2=Hennessy|first2=J. L.|year=2020|title=Computer Organization and Design: The Hardware/Software Interface|edition=6th|publisher=Morgan Kaufmann Publishers}}</ref>
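The effect of locality on cache behaviour can be illustrated with a short C sketch (the array size is an illustrative assumption): both functions below sum the same array, but the row-major traversal visits consecutive addresses and so reuses each fetched cache line (spatial locality), while the column-major traversal strides across rows and tends to touch a different cache line on every access.

<syntaxhighlight lang="c">
#include <stdio.h>
#include <stddef.h>

#define N 1024                 /* illustrative array size */

static double a[N][N];         /* about 8 MiB, larger than typical L1/L2 caches */

/* Row-major traversal: consecutive accesses are adjacent in memory,
   so most of them hit in a cache line that was already fetched
   (spatial locality). */
static double sum_rows(void)
{
    double s = 0.0;
    for (size_t i = 0; i < N; i++)
        for (size_t j = 0; j < N; j++)
            s += a[i][j];
    return s;
}

/* Column-major traversal: consecutive accesses are N doubles apart,
   so each one typically falls in a different cache line and misses
   far more often. */
static double sum_cols(void)
{
    double s = 0.0;
    for (size_t j = 0; j < N; j++)
        for (size_t i = 0; i < N; i++)
            s += a[i][j];
    return s;
}

int main(void)
{
    /* Same result, same arithmetic, different cache behaviour. */
    printf("%f %f\n", sum_rows(), sum_cols());
    return 0;
}
</syntaxhighlight>

Timing the two functions, for example with hardware performance counters, typically shows the column-order loop running several times slower on cached hardware, even though both perform the same number of additions.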
 
==={{Anchor|GPU}}GPU cache===