Memory access pattern

In a [[gather (vector addressing)|gather]] memory access pattern, reads are randomly addressed or indexed, whilst the writes are sequential (or linear).<ref name="gpu gems2"/> An example is found in [[inverse texture mapping]], where data can be written out linearly across [[scan line]]s, whilst random access texture addresses are calculated per [[pixel]].
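A minimal C sketch of a gather loop (illustrative only; the function and variable names are hypothetical): the output is written sequentially while each read goes through an index table.

<syntaxhighlight lang="c">
#include <stddef.h>

/* Gather: linear writes to out[], randomly indexed reads from in[]. */
void gather(float *out, const float *in, const size_t *idx, size_t n)
{
    for (size_t i = 0; i < n; ++i) {
        out[i] = in[idx[i]];   /* sequential write, indexed (random) read */
    }
}
</syntaxhighlight>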
 
Compared to scatter, the disadvantage is that caching (and hiding latencies) is now essential for efficient reads of small elements; however, it is easier to parallelise since the writes are guaranteed not to overlap. As such, the gather approach is more common in [[GPGPU]] programming,<ref name="gpu gems"/> where the massive threading (enabled by parallelism) is used to hide read latencies.<ref name="gpu gems">{{cite book|title=GPU Gems|url=https://books.google.com/books?id=lGMzmbUhpiAC&q=scatter+memory+access+pattern&pg=PA51|isbn=9780123849892|date=2011-01-13|publisher=Elsevier}} The book discusses "scatter memory access patterns" and "gather memory access patterns" in the text.</ref>
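Because each iteration writes to a distinct output element, the gather loop above can be parallelised directly; the following is a sketch using OpenMP on a CPU (illustrative only, not tied to any particular GPU API).

<syntaxhighlight lang="c">
#include <stddef.h>

/* Parallel gather: the writes never overlap, so iterations are independent
   and can be distributed across threads without synchronisation.
   Without OpenMP enabled (e.g. -fopenmp), the pragma is ignored and the
   loop simply runs serially. */
void gather_parallel(float *out, const float *in, const size_t *idx, size_t n)
{
    #pragma omp parallel for
    for (size_t i = 0; i < n; ++i) {
        out[i] = in[idx[i]];
    }
}
</syntaxhighlight>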
 
=== Combined gather and scatter ===