{{short description|Computer processing technique to boost memory performance}}
'''Cache prefetching''' is a computer-processor technique used to boost program execution performance by fetching instructions and data from main memory (slower memory) into local cache memory (faster memory) before they are actually needed.<ref name=":3">{{Cite journal|last=Smith|first=Alan Jay|date=1982-09-01|title=Cache Memories|journal=ACM Comput. Surv.|volume=14|issue=3|pages=473–530|doi=10.1145/356887.356892|issn=0360-0300}}</ref> Most modern computer processors have fast and local [[Cache (computing)|cache memory]] in which prefetched data is held until it is required. The source for the prefetch operation is usually [[Computer data storage#Primary storage|main memory]]. Because of their design, accessing [[Cache (computing)|cache memories]] is typically much faster than accessing [[main memory]], so prefetching data and then accessing it from caches is usually many orders of magnitude faster than accessing it directly from [[Computer data storage#Primary storage|main memory]]. Prefetching can be done with non-blocking [[cache control instruction]]s.
== Data vs. instruction cache prefetching ==
* '''Data prefetching''' fetches data before it is needed. Because data access patterns show less regularity than instruction patterns, accurate data prefetching is generally more challenging than instruction prefetching.
* '''Instruction prefetching''' fetches instructions before they need to be executed. The first mainstream microprocessors to use some form of instruction prefetch were the [[Intel]] [[8086]] (six bytes) and the [[Motorola]] [[68000]] (four bytes). Today, virtually all high-performance processors use prefetching techniques.
== Hardware vs. software cache prefetching ==
Cache prefetching can be accomplished either by hardware or by software.<ref name=":2" />
* '''Hardware-based prefetching''' is typically accomplished by having a dedicated hardware mechanism in the processor that watches the stream of instructions or data being requested by the executing program, recognizes the next few elements that the program might need based on this stream, and prefetches them into the processor's cache.<ref name=":1" />
* '''Software-based prefetching''' is typically accomplished by having the compiler analyze the code and insert additional "prefetch" instructions into the program during compilation.<ref>{{Cite thesis|last=Porterfield|first=Allan Kennedy|title=Software methods for improvement of cache performance on supercomputer applications|date=1989-01-01|publisher=Rice University|hdl=1911/19069}}</ref>
== Methods of hardware prefetching ==
=== Stream buffers ===
* Stream buffers were developed based on the concept of "one block lookahead (OBL) scheme" proposed by [[Alan Jay Smith]].<ref name=":3" />
* Stream [[Data buffer|buffers]] are one of the most common hardware-based prefetching techniques in use. This technique was originally proposed by [[Norman Jouppi]] in 1990.<ref name=":1">{{Cite conference |last=Jouppi |first=Norman P. |date=1990 |title=Improving direct-mapped cache performance by the addition of a small fully-associative cache and prefetch buffers |conference=17th Annual International Symposium on Computer Architecture (ISCA 1990) |publisher=ACM |pages=364–373 |doi=10.1145/325164.325162 |isbn=0-89791-366-3}}</ref>
[[File:CachePrefetching_StreamBuffers.png|center|thumb|450x450px|A typical stream buffer setup, as originally proposed]]
* Whenever the prefetch mechanism detects a miss on a memory block, say ''A'', it allocates a stream to begin prefetching successive blocks from the missed block onward. If the stream buffer can hold 4 blocks, then it would prefetch ''A''+1, ''A''+2, ''A''+3, ''A''+4 and hold those in the buffer. If the processor consumes ''A''+1 next, that block is moved "up" from the stream buffer into the processor's cache, and the first entry of the stream buffer becomes ''A''+2, and so on. This pattern of prefetching successive blocks is called '''sequential prefetching'''. It is mainly used when contiguous locations are to be prefetched, for example when prefetching instructions. (A short code sketch of this mechanism follows this list.)
* This mechanism can be scaled up by adding multiple such stream buffers, each of which maintains a separate prefetch stream. For each new miss, a new stream buffer is allocated, which operates in the same way as described above.
* The ideal depth of the stream buffer depends on the rest of the microarchitecture involved and is typically determined by experimentation against various benchmarks.
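The behavior of a single stream buffer can be illustrated with a short C sketch. This is a simplified model for illustration only; the depth, types, and function names are assumptions, not part of any particular design:<syntaxhighlight lang="c">
#include <stdbool.h>
#include <stdint.h>

#define STREAM_DEPTH 4   /* number of blocks the stream buffer holds */

/* A single stream buffer holding consecutive block addresses. */
typedef struct {
    uint64_t blocks[STREAM_DEPTH]; /* prefetched block addresses */
    int      head;                 /* index of the oldest entry   */
    bool     active;
} StreamBuffer;

/* On a cache miss for block A, allocate the stream and prefetch
 * the successive blocks A+1 .. A+STREAM_DEPTH. */
static void allocate_stream(StreamBuffer *sb, uint64_t missed_block) {
    for (int i = 0; i < STREAM_DEPTH; i++)
        sb->blocks[i] = missed_block + 1 + i; /* sequential (OBL-style) lookahead */
    sb->head = 0;
    sb->active = true;
}

/* When the processor requests a block, check the head of the stream
 * buffer. On a hit, the block moves "up" into the cache and the freed
 * slot is refilled with the next consecutive block. */
static bool lookup_stream(StreamBuffer *sb, uint64_t requested_block) {
    if (!sb->active || sb->blocks[sb->head] != requested_block)
        return false;                       /* not at the head: stream misses */
    uint64_t last = sb->blocks[(sb->head + STREAM_DEPTH - 1) % STREAM_DEPTH];
    sb->blocks[sb->head] = last + 1;        /* refill slot with next block    */
    sb->head = (sb->head + 1) % STREAM_DEPTH;
    return true;                            /* block supplied to the cache    */
}
</syntaxhighlight>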
=== Strided prefetching ===
This type of prefetching monitors the delta between the addresses of the memory accesses and looks for patterns within it.
==== Regular strides ====
In this pattern, consecutive memory accesses are made to blocks that are ''s'' addresses apart.<ref name=":2" /><ref>{{Cite conference |last1=Kondguli |first1=Sushant |last2=Huang |first2=Michael |date=November 2017 |title=T2: A Highly Accurate and Energy Efficient Stride Prefetcher |conference=2017 IEEE International Conference on Computer Design (ICCD) |pages=373–376 |doi=10.1109/ICCD.2017.64|isbn=978-1-5386-2254-4 |s2cid=11055312 }}</ref> In this case, the prefetcher calculates ''s'' and uses it to compute the memory address to prefetch. For example, if {{Nowrap|1=''s'' = 4}} and the current address is ''A'', the address to be prefetched would be ''A'' + 4. A sketch of such a stride detector follows.
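The following C sketch detects a stable stride for a single access stream; real designs typically track one such entry per load instruction in a reference prediction table, and all names here are assumptions for illustration:<syntaxhighlight lang="c">
#include <stdint.h>

/* Minimal stride detector for one access stream: once the same delta
 * ("stride") has been confirmed twice in a row, predict the next address. */
typedef struct {
    uint64_t last_addr;
    int64_t  last_stride;
    int      confidence;   /* how many times the stride repeated */
} StrideEntry;

/* Returns the predicted prefetch address, or 0 if no stable stride yet
 * (0 is used as a sentinel for "no prediction" in this sketch). */
static uint64_t observe_access(StrideEntry *e, uint64_t addr) {
    int64_t stride = (int64_t)(addr - e->last_addr);
    if (stride == e->last_stride)
        e->confidence++;
    else
        e->confidence = 0;
    e->last_stride = stride;
    e->last_addr = addr;
    /* With s confirmed, the next access is predicted at addr + s. */
    return (e->confidence >= 2) ? addr + stride : 0;
}
</syntaxhighlight>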
==== Irregular spatial strides ====
In this case, the delta between the addresses of consecutive memory accesses is variable but still follows a pattern. Some prefetcher designs<ref name="grannaes" /><ref>{{Cite conference |last1=Shevgoor |first1=Manjunath |last2=Koladiya |first2=Sahil |last3=Balasubramonian |first3=Rajeev |last4=Wilkerson |first4=Chris |last5=Pugsley |first5=Seth H. |last6=Chishti |first6=Zeshan |date=December 2015 |title=Efficiently prefetching complex address patterns |url=https://ieeexplore.ieee.org/document/7856594 |conference=2015 48th Annual IEEE/ACM International Symposium on Microarchitecture (MICRO) |pages=141–152 |doi=10.1145/2830772.2830793 |isbn=9781450340342 |s2cid=15294463}}</ref><ref>{{Cite conference |last1=Kim |first1=Jinchun |last2=Pugsley |first2=Seth H. |last3=Gratz |first3=Paul V. |last4=Reddy |first4=A.L. Narasimha |last5=Wilkerson |first5=Chris |last6=Chishti |first6=Zeshan |date=October 2016 |title=Path confidence based lookahead prefetching |conference=2016 49th Annual IEEE/ACM International Symposium on Microarchitecture (MICRO) |pages=1–12 |doi=10.1109/MICRO.2016.7783763|isbn=978-1-5090-3508-3 |s2cid=1097472 }}</ref> exploit this property to predict and prefetch for future accesses.
==== Irregular temporal prefetching ====
This class of prefetchers looks for memory access streams that repeat over time.<ref>{{Cite conference |last1=Joseph |first1=Doug |last2=Grunwald |first2=Dirk |date=1997-05-01 |title=Prefetching using Markov predictors |url=https://doi.org/10.1145/264107.264207 |conference=ISCA 1997 |series=ISCA 1997 |___location=New York, New York, USA |publisher=Association for Computing Machinery |pages=252–263 |doi=10.1145/264107.264207 |isbn=978-0-89791-901-2 |book-title=Proceedings of the 24th Annual International Symposium on Computer Architecture |s2cid=434419}}</ref><ref>{{Cite conference |last1=Collins |first1=J. |last2=Sair |first2=S. |last3=Calder |first3=B. |last4=Tullsen |first4=D.M. |date=November 2002 |title=Pointer cache assisted prefetching |conference=35th Annual IEEE/ACM International Symposium on Microarchitecture, 2002. (MICRO-35). Proceedings. |pages=62–73 |doi=10.1109/MICRO.2002.1176239|isbn=0-7695-1859-1 |s2cid=5608519 }}</ref> For example, in the stream of memory accesses N, A, B, C, E, G, H, A, B, C, I, J, K, A, B, C, L, M, N, O, A, B, C, ...; the stream A, B, C is repeating over time. Other design variations have tried to provide more efficient implementations.<ref>{{Cite conference |last1=Jain |first1=Akanksha |last2=Lin |first2=Calvin |date=2013-12-07 |title=Linearizing irregular memory accesses for improved correlated prefetching |url=https://doi.org/10.1145/2540708.2540730 |conference=MICRO-46 |___location=New York, New York, USA |publisher=Association for Computing Machinery |pages=247–259 |doi=10.1145/2540708.2540730 |isbn=978-1-4503-2638-4 |book-title=Proceedings of the 46th Annual IEEE/ACM International Symposium on Microarchitecture |s2cid=196831|url-access=subscription }}</ref><ref>{{Cite web |title=Making Temporal Prefetchers Practical: The MISB Prefetcher – Research Articles – Arm Research – Arm Community |url=https://community.arm.com/arm-research/b/articles/posts/making-temporal-prefetchers-practical--the-misb-prefetcher |access-date=2022-03-16 |website=community.arm.com |date=24 June 2019 |language=en-us}}</ref>
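A simple temporal prefetcher in the spirit of a Markov predictor can be sketched as a correlation table that records, for each block, the block that followed it last time, and prefetches that successor when the block is seen again. The table organization and names below are assumptions for illustration:<syntaxhighlight lang="c">
#include <stdint.h>

#define TABLE_SIZE 4096

/* Correlation table: for each block address, remember the block that
 * followed it last time (a one-entry Markov predictor per block). */
typedef struct {
    uint64_t tag;        /* block address this entry describes */
    uint64_t successor;  /* block that followed it last time   */
} CorrEntry;

static CorrEntry corr_table[TABLE_SIZE];
static uint64_t  prev_block;

/* On each cache miss: record the (previous -> current) transition, then
 * return the remembered successor of the current block as a prefetch
 * candidate (0 if none is known yet). */
static uint64_t observe_miss(uint64_t block) {
    CorrEntry *prev = &corr_table[prev_block % TABLE_SIZE];
    if (prev->tag == prev_block)
        prev->successor = block;              /* update prev -> block */
    else
        *prev = (CorrEntry){ prev_block, block };
    prev_block = block;

    CorrEntry *cur = &corr_table[block % TABLE_SIZE];
    return (cur->tag == block) ? cur->successor : 0;
}
</syntaxhighlight>For the repeating stream A, B, C above, once the transitions A→B and B→C have been recorded, each later miss on A yields B as a prefetch candidate, then B yields C, and so on.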
=== Collaborative prefetching ===
Computer applications generate a variety of access patterns, and the processor and memory subsystem architectures used to execute them further shape the memory access patterns they produce. Hence, the effectiveness and efficiency of prefetching schemes often depend on the application and the architecture used to execute it.<ref>{{Cite journal |last1=Kim |first1=Jinchun |last2=Teran |first2=Elvira |last3=Gratz |first3=Paul V. |last4=Jiménez |first4=Daniel A. |last5=Pugsley |first5=Seth H. |last6=Wilkerson |first6=Chris |date=2017-05-12 |title=Kill the Program Counter: Reconstructing Program Behavior in the Processor Cache Hierarchy |journal=ACM SIGPLAN Notices |language=en |volume=52 |issue=4 |pages=737–749 |doi=10.1145/3093336.3037701 |issn=0362-1340|doi-access=free }}</ref> Recent research<ref>{{Cite conference |last1=Kondguli |first1=Sushant |last2=Huang |first2=Michael |date=2018-06-02 |title=Division of labor: a more effective approach to prefetching |url=https://doi.org/10.1109/ISCA.2018.00018 |book-title=Proceedings of the 45th Annual International Symposium on Computer Architecture |conference=ISCA '18 |___location=Los Angeles, California |publisher=IEEE Press |pages=83–95 |doi=10.1109/ISCA.2018.00018 |isbn=978-1-5386-5984-7|s2cid=50777324 |url-access=subscription }}</ref><ref>{{Cite conference |last1=Pakalapati |first1=Samuel |last2=Panda |first2=Biswabandan |date=May 2020 |title=Bouquet of Instruction Pointers: Instruction Pointer Classifier-based Spatial Hardware Prefetching |conference=2020 ACM/IEEE 47th Annual International Symposium on Computer Architecture (ISCA) |pages=118–131 |doi=10.1109/ISCA45697.2020.00021|isbn=978-1-7281-4661-4 |s2cid=218683672 }}</ref> has therefore focused on building collaborative mechanisms that synergistically use multiple prefetching schemes for better prefetching coverage and accuracy.
== Methods of software prefetching ==
=== Compiler directed prefetching ===
Compiler directed prefetching is widely used within loops with a large number of iterations. In this technique, the compiler predicts future cache misses and inserts a prefetch instruction based on the miss penalty and the execution time of the instructions.
These prefetches are non-blocking memory operations, i.e. these memory accesses do not interfere with actual memory accesses. They do not change the state of the processor or cause page faults.
One main advantage of software prefetching is that it reduces the number of compulsory cache misses.<ref name=":2" />
The following example shows how a prefetch instruction can be added into code to improve [[Locality of reference|cache performance]].
Consider a ''for'' loop as shown below:<syntaxhighlight lang="c">
for (int i=0; i<1024; i++) {
array1[i] = 2 * array1[i];
}
</syntaxhighlight>At each iteration, the ''i''<sup>th</sup> element of the array <code>array1</code> is accessed. Therefore, the system can prefetch the elements that are going to be accessed in future iterations by inserting a "prefetch" instruction, as shown below:<syntaxhighlight lang="c">
for (int i=0; i<1024; i++) {
prefetch(array1[i + k]);
array1[i] = 2 * array1[i];
}
</syntaxhighlight>Here, the prefetch stride <math>k</math> depends on two factors: the cache miss penalty and the time it takes to execute a single iteration of the ''for'' loop. For instance, if one iteration of the loop takes 7 cycles to execute and the cache miss penalty is 49 cycles, then we should have <math>k = 49/7 = 7</math>, which means the program should prefetch 7 elements ahead.
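In practice, compilers expose such prefetch instructions through intrinsics; for example, GCC and Clang provide <code>__builtin_prefetch</code>. As an illustration (assuming the <code>array1</code> loop above and the stride of 7 from the example), the loop could be written as:<syntaxhighlight lang="c">
/* GCC/Clang intrinsic form of the loop above. The second argument (1)
   marks the access as a write, and the third (3) requests high temporal
   locality. Prefetching a few elements past the end of the array is
   harmless, since prefetch hints do not fault. */
for (int i = 0; i < 1024; i++) {
    __builtin_prefetch(&array1[i + 7], 1, 3);
    array1[i] = 2 * array1[i];
}
</syntaxhighlight>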
== Comparison of hardware and software prefetching ==
* While software prefetching requires programmer or [[compiler]] intervention, hardware prefetching requires special hardware mechanisms.<ref name=":2" />
* Software prefetching works well only with loops where there is regular array access, as the programmer has to hand-code the prefetch instructions, whereas hardware prefetchers work dynamically based on the program's behavior at [[Run time (program lifecycle phase)|runtime]].<ref name=":2" />
* Hardware prefetching also has less CPU overhead when compared to software prefetching.<ref>{{Cite conference |last1=Callahan |first1=David |last2=Kennedy |first2=Ken |last3=Porterfield |first3=Allan |date=1991-01-01 |title=Software Prefetching |conference=Fourth International Conference on Architectural Support for Programming Languages and Operating Systems |___location=Santa Clara, California, USA |publisher=ACM |pages=40–52}}</ref> However, software prefetching can mitigate certain constraints of hardware prefetching, leading to performance improvements.<ref>{{Cite journal |last1=Lee |first1=Jaekyu |last2=Kim |first2=Hyesoon |last3=Vuduc |first3=Richard |title=When Prefetching Works, When It Doesn't, and Why |journal=ACM Transactions on Architecture and Code Optimization |date=2012 |volume=9 |pages=1–29 |doi=10.1145/2133382.2133384}}</ref>
== Metrics of cache prefetching ==
There are three main metrics to judge cache prefetching.<ref name=":2">{{Cite book |last=Solihin |first=Yan |title=Fundamentals of parallel multicore architecture |publisher=Chapman and Hall/CRC |year=2016 |isbn=978-1482211184}}</ref>
=== Coverage ===
Coverage is the fraction of total misses that are eliminated because of prefetching, i.e.

<math>\text{Coverage} = \frac{\text{Cache Misses eliminated by Prefetching}}{\text{Total Cache Misses}}</math>,

where <math>\text{Total Cache Misses} = (\text{Cache Misses eliminated by Prefetching}) + (\text{Cache Misses not eliminated by Prefetching})</math>.
=== Accuracy ===
Accuracy is the fraction of total prefetches that were useful, i.e. the ratio of the number of prefetched memory addresses that were actually referenced by the program to the total number of prefetches done:
<math>\text{Prefetch Accuracy} = \frac{\text{Cache Misses eliminated by prefetching}}{(\text{Useless Cache Prefetches}) + (\text{Cache Misses eliminated by prefetching})}</math>
While it appears that having perfect accuracy might imply that there are no misses, this is not the case. The prefetches themselves might result in new misses if the prefetched blocks are placed directly into the cache. Although these may be a small fraction of the total number of misses seen without any prefetching, this is a non-zero number of misses.
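As a worked example of the coverage and accuracy formulas, the following sketch computes both from hypothetical event counts (all numbers are invented for illustration):<syntaxhighlight lang="c">
#include <stdio.h>

int main(void) {
    /* Hypothetical counters from a cache simulation (illustrative only). */
    double misses_eliminated  = 600; /* misses removed by prefetching      */
    double misses_remaining   = 400; /* misses the prefetcher did not help */
    double useless_prefetches = 200; /* prefetched blocks never referenced */

    double total_misses = misses_eliminated + misses_remaining;
    double coverage = misses_eliminated / total_misses;                /* 0.60 */
    double accuracy = misses_eliminated /
                      (useless_prefetches + misses_eliminated);        /* 0.75 */

    printf("coverage = %.2f, accuracy = %.2f\n", coverage, accuracy);
    return 0;
}
</syntaxhighlight>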
=== Timeliness ===
The qualitative definition of timeliness is how early a block is prefetched versus when it is actually referenced. An example to further explain timeliness is as follows:

Consider a ''for'' loop where each iteration takes 3 cycles to execute and the "prefetch" operation takes 12 cycles. This implies that for the prefetched data to be useful, the prefetch must be started <math>12/3 = 4</math> iterations prior to its usage to maintain timeliness.
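Applied to the earlier array example (with the same illustrative <code>prefetch</code> operation), this constraint sets the minimum prefetch distance:<syntaxhighlight lang="c">
/* Each iteration takes 3 cycles and the prefetch takes 12 cycles, so the
   data must be requested at least 12/3 = 4 iterations ahead of its use. */
for (int i = 0; i < 1024; i++) {
    prefetch(array1[i + 4]);   /* k >= 4 keeps the prefetch timely */
    array1[i] = 2 * array1[i];
}
</syntaxhighlight>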