Cache prefetching: Difference between revisions

{{short description|Computer processing technique to boost memory performance}}
'''Cache prefetching''' is a technique used by computer processors to boost execution performance by fetching instructions or data from their original storage in slower memory to a faster local memory before they are actually needed (hence the term 'prefetch').<ref name=":3">{{Cite journal|last=Smith|first=Alan Jay|date=1982-09-01|title=Cache Memories|journal=ACM Comput. Surv.|volume=14|issue=3|pages=473–530|doi=10.1145/356887.356892|issn=0360-0300}}</ref> Most modern computer processors have fast and local [[Cache (computing)|cache memory]] in which prefetched data is held until it is required. The source for the prefetch operation is usually [[Computer data storage#Primary storage|main memory]]. Because of their design, accessing [[Cache (computing)|cache memories]] is typically much faster than accessing [[main memory]], so prefetching data and then accessing it from caches is usually many orders of magnitude faster than accessing it directly from [[Computer data storage#Primary storage|main memory]]. Prefetching can be done with non-blocking [[cache control instruction]]s.
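As an illustration of software-directed prefetching with a non-blocking hint, the sketch below uses the GCC/Clang intrinsic <code>__builtin_prefetch</code> to request a cache line several loop iterations ahead of its use. The prefetch distance of 16 elements is an assumption chosen for illustration; a real value would be tuned to the memory latency and loop body of the target machine.

```c
#include <stddef.h>

/* Sum an array while hinting the processor to prefetch data a fixed
 * distance ahead of the current element. The hint is non-blocking:
 * execution continues whether or not the line has arrived. */
long sum_with_prefetch(const long *a, size_t n) {
    long total = 0;
    for (size_t i = 0; i < n; i++) {
#if defined(__GNUC__)
        /* Prefetch distance of 16 elements is an illustrative assumption.
         * Arguments: address, rw (0 = read), locality (3 = keep in cache). */
        if (i + 16 < n)
            __builtin_prefetch(&a[i + 16], 0, 3);
#endif
        total += a[i];
    }
    return total;
}
```

The result is identical with or without the hint; only the timing of the memory accesses changes, which is why such intrinsics are safe to guard behind a compiler check.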
== Data vs. instruction cache prefetching ==
 
=== Stream buffers ===
* Stream buffers were developed based on the concept of "one block lookahead (OBL) scheme" proposed by [[Alan Jay Smith]].<ref name=":3" />
* Stream [[Data buffer|buffers]] are one of the most common hardware based prefetching techniques in use. This technique was originally proposed by [[Norman Jouppi]] in 1990<ref name=":1">{{cite conference | last=Jouppi | first=Norman P. | title=Improving direct-mapped cache performance by the addition of a small fully-associative cache and prefetch buffers | publisher=ACM Press | ___location=New York, New York, USA | year=1990 | isbn=0-89791-366-3 | doi=10.1145/325164.325162 |citeseerx=10.1.1.37.6114}}</ref> and many variations of this method have been developed since.<ref>{{Cite journal|last1=Chen|first1=Tien-Fu|last2=Baer|first2=Jean-Loup|s2cid=1450745|date=1995-05-01|title=Effective hardware-based data prefetching for high-performance processors|journal=IEEE Transactions on Computers|volume=44|issue=5|pages=609–623|doi=10.1109/12.381947|issn=0018-9340}}</ref><ref>{{Cite conference|last1=Palacharla|first1=S.|last2=Kessler|first2=R. E.|date=1994-01-01|title=Evaluating Stream Buffers As a Secondary Cache Replacement|conference=21st Annual International Symposium on Computer Architecture|___location=Chicago, IL, USA|publisher=IEEE Computer Society Press|pages=24–33|doi= 10.1109/ISCA.1994.288164|isbn=978-0818655104|citeseerx=10.1.1.92.3031}}</ref><ref>{{cite journal| last1=Grannaes | first1=Marius | last2=Jahre | first2=Magnus | last3=Natvig | first3=Lasse | title=Storage Efficient Hardware Prefetching using Delta-Correlating Prediction Tables |citeseerx=10.1.1.229.3483 |journal=Journal of Instruction-Level Parallelism |issue=13 |year=2011 |pages=1–16}}</ref> The basic idea is that the [[cache miss]] address (and <math>k</math> subsequent addresses) are fetched into a separate buffer of depth <math>k</math>. This buffer is called a stream buffer and is separate from cache. 
The processor then consumes data or instructions from the stream buffer if the address associated with the prefetched blocks matches the address requested by the program executing on the processor. The figure below illustrates this setup:
[[File:CachePrefetching_StreamBuffers.png|center|<ref name=":1"/> A typical stream buffer setup as originally proposed by Norman Jouppi in 1990|alt=A typical stream buffer setup as originally proposed|thumb|400x400px]]
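The fill-and-consume behavior described above can be sketched as a small model (a simplification for illustration, not Jouppi's exact hardware): on a cache miss to block <math>b</math>, the <math>k</math> successor blocks are fetched into a FIFO; a later reference that matches the head of the FIFO is a stream-buffer hit, while a non-sequential reference flushes the buffer. The depth of 4 and the flush-on-mismatch policy are assumptions of this sketch.

```c
#include <stdbool.h>
#include <stddef.h>

#define DEPTH 4  /* k: stream buffer depth (chosen for illustration) */

/* A FIFO holding the addresses of the next DEPTH prefetched blocks. */
typedef struct {
    unsigned long blocks[DEPTH];
    size_t head;   /* index of the next block expected to be consumed */
    size_t count;  /* valid entries remaining in the buffer */
} stream_buffer;

/* On a cache miss to `miss_block`, prefetch its k successor blocks
 * into the stream buffer. */
void fill_on_miss(stream_buffer *sb, unsigned long miss_block) {
    for (size_t i = 0; i < DEPTH; i++)
        sb->blocks[i] = miss_block + 1 + i;
    sb->head = 0;
    sb->count = DEPTH;
}

/* A request that matches the head of the FIFO is a stream-buffer hit
 * and is consumed; a mismatch flushes the buffer (one possible policy
 * for handling a non-sequential reference). */
bool stream_buffer_lookup(stream_buffer *sb, unsigned long block) {
    if (sb->count > 0 && sb->blocks[sb->head] == block) {
        sb->head++;
        sb->count--;
        return true;
    }
    sb->count = 0;  /* flush on non-sequential access */
    return false;
}
```

A sequential access pattern (miss on block 100, then requests for 101, 102, ...) hits in the buffer until it is drained or a non-sequential address arrives, which is the case stream buffers are designed to accelerate.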