In computing, a memory access pattern is the pattern with which a system or program reads and writes memory. These patterns differ in their level of locality of reference and drastically affect cache performance,[1] and also have implications for the approach to parallelism and the distribution of workload in shared memory systems.[2][3][4] Computer memory is usually described as 'random access', but traversals by software will still exhibit patterns that can be exploited for efficiency.
Examples
Sequential
The simplest extreme is the sequential access pattern, where data is read, processed, and written out with straightforward incremented/decremented addressing. These access patterns are highly amenable to prefetching.
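A minimal sketch in C (the function and array names are illustrative): a sequential pass touches consecutive addresses, so every element of each fetched cache line is used and a hardware prefetcher can stream data ahead of the loop.

```c
#include <stddef.h>

/* Sequential access: the address advances by one element per
   iteration, so every byte of each fetched cache line is used and
   the hardware prefetcher can stream data ahead of the loop. */
void scale(float *a, size_t n, float k)
{
    for (size_t i = 0; i < n; i++)
        a[i] *= k;
}
```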
Strided
Strided or simple 2D/3D access patterns (e.g. stepping through multi-dimensional arrays) are similarly easy to predict, and are found in implementations of linear algebra algorithms and image processing. Loop tiling is an effective approach for keeping the working set of such traversals in cache.
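As an illustrative C sketch (the matrix dimension and block size are arbitrary choices here, not prescribed values), a matrix transpose shows both a large but predictable stride and how loop tiling restores locality by working on cache-sized blocks:

```c
#include <stddef.h>

#define N 1024   /* matrix dimension (row-major storage) */
#define B 32     /* tile edge; B*B floats fit comfortably in L1 */

/* Naive transpose: src is read sequentially, but dst is written with
   a stride of N floats - easy to predict, yet only one element per
   fetched cache line is used on the write side. */
void transpose_naive(float dst[N][N], const float src[N][N])
{
    for (size_t i = 0; i < N; i++)
        for (size_t j = 0; j < N; j++)
            dst[j][i] = src[i][j];
}

/* Tiled transpose: processing B x B blocks keeps the touched lines of
   both matrices resident in cache for the duration of each block. */
void transpose_tiled(float dst[N][N], const float src[N][N])
{
    for (size_t ii = 0; ii < N; ii += B)
        for (size_t jj = 0; jj < N; jj += B)
            for (size_t i = ii; i < ii + B; i++)
                for (size_t j = jj; j < jj + B; j++)
                    dst[j][i] = src[i][j];
}
```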
Spatially coherent
In 3D rendering, access patterns for texture mapping and rasterization of small primitives (with arbitrary distortions of complex surfaces) are far from linear, but can still exhibit spatial locality (e.g. in screen space or texture space). This can be turned into good memory locality via some combination of Morton order and tiling for texture maps and frame buffer data (mapping spatial regions onto cache lines), or by sorting primitives via tile-based deferred rendering.
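One standard way to realise such a mapping is Morton (Z-order) indexing, which interleaves the bits of the x and y coordinates. A common bit-twiddling sketch in C:

```c
#include <stdint.h>

/* Spread the low 16 bits of v so they occupy the even bit positions. */
static uint32_t part1by1(uint32_t v)
{
    v &= 0x0000ffff;
    v = (v | (v << 8)) & 0x00ff00ff;
    v = (v | (v << 4)) & 0x0f0f0f0f;
    v = (v | (v << 2)) & 0x33333333;
    v = (v | (v << 1)) & 0x55555555;
    return v;
}

/* Morton (Z-order) index: interleaving the bits of x and y keeps
   texels that are close together in 2D close together in memory, so
   a small screen- or texture-space region maps onto few cache lines. */
uint32_t morton2d(uint32_t x, uint32_t y)
{
    return part1by1(x) | (part1by1(y) << 1);
}
```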
Random
At the opposite extreme is a truly random memory access pattern. A few multiprocessor systems are specialised to deal with these.[5] The PGAS approach may help by sorting operations by data on the fly; this is useful when the problem is precisely to determine the locality of unsorted data.[6]
Data structures that rely heavily on pointer chasing often produce poor locality of reference, although sorting the data can sometimes help.
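A linked-list traversal in C illustrates the problem (the struct shown is illustrative): each load address is known only after the previous load completes, so the latency of cache misses on scattered nodes cannot be hidden.

```c
#include <stddef.h>

struct node {
    int value;
    struct node *next;   /* nodes may be scattered across the heap */
};

/* Pointer chasing: the address of each load depends on the result of
   the previous one, so neither the prefetcher nor out-of-order
   execution can hide the latency of misses on scattered nodes. */
long sum_list(const struct node *head)
{
    long total = 0;
    for (const struct node *p = head; p != NULL; p = p->next)
        total += p->value;
    return total;
}
```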
Approaches
Data-oriented design is an approach intended to maximise locality of reference by organising data according to how it is traversed in the various stages of a program, in contrast with the more common object-oriented approach.
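A hedged C sketch of the contrast (the particle structures and field names are illustrative): a structure-of-arrays layout groups fields by how they are traversed, so a pass that needs only positions reads densely packed cache lines, whereas an array-of-structures layout drags unused fields along.

```c
#include <stddef.h>

/* Array-of-structures: each particle's fields sit together, so a pass
   that needs only positions still pulls velocities into the cache. */
struct particle_aos { float px, py, vx, vy; };

/* Structure-of-arrays: fields that are traversed together are stored
   together, one contiguous array per field. */
struct particles_soa {
    float *px, *py;   /* positions */
    float *vx, *vy;   /* velocities */
};

/* A position-only pass over the SoA layout touches just the px array;
   velocity data never enters the cache. (Assumes n >= 1.) */
float max_x(const struct particles_soa *p, size_t n)
{
    float m = p->px[0];
    for (size_t i = 1; i < n; i++)
        if (p->px[i] > m)
            m = p->px[i];
    return m;
}
```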
Sometimes reads and writes exhibit different access patterns. A series of random writes may scatter data without needing to pull in cache lines, producing data that is easier to access sequentially in a subsequent step.
Alternatively, an algorithm may randomly gather its source data whilst writing sequentially to its destination. Such foreknowledge of the pattern can be exploited for parallelisation, and the gather form is usually preferable in GPU programming.
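A small C sketch of the two patterns (function names are illustrative): in the gather loop each output index is written exactly once, so iterations are trivially independent; the scatter loop needs duplicate-free indices or atomic writes to be parallelised safely.

```c
#include <stddef.h>

/* Gather: random reads, sequential writes. Each output element is
   written exactly once, so iterations are independent and map cleanly
   onto parallel threads or GPU lanes. */
void gather(float *dst, const float *src, const size_t *idx, size_t n)
{
    for (size_t i = 0; i < n; i++)
        dst[i] = src[idx[i]];
}

/* Scatter: sequential reads, random writes. Safe parallelisation
   requires that idx contain no duplicates, or atomic/combining
   writes to resolve collisions. */
void scatter(float *dst, const float *src, const size_t *idx, size_t n)
{
    for (size_t i = 0; i < n; i++)
        dst[idx[i]] = src[i];
}
```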
- ^ "data oriented design" (PDF).
- ^ "enhancing cache coherent architectures with memory access patterns for embedded many-core systems" (PDF).
- ^ "gpgpu scatter vs gather".
- ^ "Analysis of Energy and Performance of Code Transformations for PGAS-based Data Access Patterns" (PDF).
{{cite web}}
: Cite has empty unknown parameter:|1=
(help) - ^ "Cray and HPCC: Benchmark Developments and Results from the Past Year (see global random access results for Cray X1)" (PDF).
{{cite web}}
: Cite has empty unknown parameter:|1=
(help); line feed character in|title=
at position 15 (help) - ^ "partitioned global address space programming".