In [[computing]], a '''memory access pattern''' is the pattern with which a system or program reads and writes [[Memory (computing)|memory]]. These patterns differ in the level of [[locality of reference]] and drastically affect [[Cache (computing)|cache]] performance,<ref>{{cite web|title = data oriented design|url=http://www.dice.se/wp-content/uploads/2014/12/Introduction_to_Data-Oriented_Design.pdf}}</ref> and also have implications for the approach to [[parallelism (computing)|parallelism]] and distribution of workload in shared memory systems.
<ref>{{cite web|title=enhancing cache coherent architectures with memory access patterns for embedded many-core systems|url=http://www.cc.gatech.edu/~bader/papers/EnhancingCache-SoC12.pdf}}</ref>
<ref name="gpgpu gems">{{cite web|title=gpgpu scatter vs gather|url=http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter31.html}}</ref>
<ref>{{cite web|title=Analysis of Energy and Performance of Code Transformations for PGAS-based Data Access Patterns|url=http://nic.uoregon.edu/pgas14/papers/pgas14_submission_17.pdf}}</ref>
Computer memory is usually described as '[[random access memory|random access]]', but traversals by software will nevertheless exhibit patterns that can be exploited for efficiency.
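The difference between traversal orders can be sketched with a minimal example (an illustration added here, not part of the cited sources): iterating a row-major 2-D array row by row touches consecutive addresses (stride 1, high spatial locality), while iterating it column by column strides by the row length, so successive accesses may fall on different cache lines.

```python
def row_major_offsets(rows, cols):
    # Row-by-row traversal of a row-major array: the linear offsets
    # are consecutive (stride 1), which maximizes spatial locality.
    return [r * cols + c for r in range(rows) for c in range(cols)]

def column_major_offsets(rows, cols):
    # Column-by-column traversal of the same row-major array:
    # successive accesses are `cols` elements apart (stride = cols),
    # so each access may land on a different cache line.
    return [r * cols + c for c in range(cols) for r in range(rows)]

print(row_major_offsets(2, 3))     # sequential offsets: [0, 1, 2, 3, 4, 5]
print(column_major_offsets(2, 3))  # strided offsets:    [0, 3, 1, 4, 2, 5]
```

Both loops visit every element exactly once; only the order of the memory addresses differs, which is precisely what a memory access pattern describes.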