Memory access pattern

In computing, a memory access pattern is the pattern over time with which a system accesses memory. Access patterns differ in their level of locality of reference and drastically affect cache performance; they also have implications for the approach to parallelism in shared-memory systems. Computer memory is usually described as 'random access', but traversals by software still exhibit patterns that can be exploited for efficiency.

At one extreme is the sequential access pattern, where data is read, processed, and written in a straightforward incremental manner.
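As a minimal sketch (in C, with a hypothetical array of doubles), the loop below reads each element exactly once in increasing address order, which is the case where caches and hardware prefetchers perform best:

```c
#include <stddef.h>

/* Minimal sketch of a sequential access pattern: each element is read
 * once, in order, so addresses increase by one element per iteration.
 * The array and its length are hypothetical inputs. */
double sum_sequential(const double *data, size_t n)
{
    double total = 0.0;
    for (size_t i = 0; i < n; ++i)
        total += data[i];   /* strictly increasing addresses */
    return total;
}
```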

Strided or simple 2D access patterns (e.g. stepping through multi-dimensional arrays) are similarly easy to predict, and are found in implementations of linear algebra and image processing. Loop tiling is an effective approach to improving the locality of such traversals, as in the sketch below.
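A hedged sketch of loop tiling, using a matrix transpose as the example; the square row-major layout and the tile size are assumptions of this illustration. A naive transpose must access one of the two matrices with a stride of a full row, while the tiled version confines both access streams to cache-sized blocks:

```c
#include <stddef.h>

/* Illustrative sketch of loop tiling: work proceeds one TILE x TILE block
 * at a time, so reads from src and writes to dst both stay within a small
 * region that fits in cache. TILE and the n x n row-major layout are
 * assumptions of this example. */
#define TILE 32

void transpose_tiled(const float *src, float *dst, size_t n)
{
    for (size_t ii = 0; ii < n; ii += TILE)
        for (size_t jj = 0; jj < n; jj += TILE)
            /* transpose one block, guarding against the matrix edge */
            for (size_t i = ii; i < ii + TILE && i < n; ++i)
                for (size_t j = jj; j < jj + TILE && j < n; ++j)
                    dst[j * n + i] = src[i * n + j];
}
```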

In 3D rendering, access patterns for texture mapping and rasterization of small primitives (with arbitrary distortions of complex surfaces) are often far from linear, but still exhibit spatial locality. This can be turned into good memory locality via some combination of Morton order and tiling for texture maps and frame buffer data, or by sorting primitives via tile-based deferred rendering.
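The following is an illustrative sketch of Morton (Z-order) indexing for 2D data such as a texture; the bit-spreading helper and the 16-bit coordinate limit are assumptions of this example. Interleaving the bits of the x and y coordinates keeps texels that are neighbours in 2D close together in memory:

```c
#include <stdint.h>

/* Sketch of Morton-order addressing: spread_bits inserts a zero between
 * each of the low 16 bits of v, so the x and y bits can be interleaved. */
static uint32_t spread_bits(uint32_t v)
{
    v &= 0x0000FFFF;
    v = (v | (v << 8)) & 0x00FF00FF;
    v = (v | (v << 4)) & 0x0F0F0F0F;
    v = (v | (v << 2)) & 0x33333333;
    v = (v | (v << 1)) & 0x55555555;
    return v;
}

/* Z-order index of texel (x, y); nearby texels get nearby indices. */
uint32_t morton_index(uint32_t x, uint32_t y)
{
    return spread_bits(x) | (spread_bits(y) << 1);
}
```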

Data structures that rely heavily on pointer chasing often produce poor locality of reference, although sorting the nodes into traversal order can sometimes help.
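A brief sketch of why pointer chasing hurts locality, using a hypothetical singly linked list: the address of each load is only known after the previous load completes, and the nodes may be scattered anywhere in memory.

```c
#include <stddef.h>

/* Sketch of pointer chasing: each iteration's load depends on the value
 * fetched in the previous one, so accesses cannot be prefetched easily
 * and may miss the cache at every step. The node layout is hypothetical. */
struct node {
    int          value;
    struct node *next;
};

long sum_list(const struct node *head)
{
    long total = 0;
    for (const struct node *p = head; p != NULL; p = p->next)
        total += p->value;   /* next address known only after this load */
    return total;
}
```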

At the opposite extreme is a truly random memory access pattern. A few multiprocessor systems are specialised to handle such patterns.
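A small sketch of what a random access pattern looks like in code, using a hypothetical gather over an index array: consecutive loads can land anywhere in the data array and rarely reuse a cache line.

```c
#include <stddef.h>

/* Sketch of a random (gather) access pattern: the address of each load
 * depends on data in `indices`, so the hardware cannot predict it.
 * Array names are hypothetical. */
double sum_gather(const double *data, const size_t *indices, size_t n)
{
    double total = 0.0;
    for (size_t i = 0; i < n; ++i)
        total += data[indices[i]];   /* effectively unpredictable address */
    return total;
}
```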

Data-oriented design is an approach intended to maximise locality of reference by organising data according to how it is traversed in the various stages of a program, in contrast with the more common object-oriented approach. A sketch contrasting the two layouts is shown below.
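A minimal sketch of the contrast; the field names and the update step are hypothetical. An object-oriented layout stores all fields of one entity together (array of structures), whereas a data-oriented layout stores each field contiguously across entities (structure of arrays), so a pass that only needs positions and velocities streams through memory sequentially:

```c
#include <stddef.h>

/* Array of structures: all fields of one particle are interleaved, so a
 * position-only pass still pulls velocities into the cache. */
struct particle_aos { float x, y, vx, vy; };

/* Structure of arrays: each field is contiguous, so a traversal touches
 * only the arrays it actually needs. */
struct particles_soa {
    float *x, *y;    /* positions  */
    float *vx, *vy;  /* velocities */
};

void integrate_soa(struct particles_soa *p, size_t n, float dt)
{
    for (size_t i = 0; i < n; ++i) {
        p->x[i] += p->vx[i] * dt;   /* sequential reads and writes */
        p->y[i] += p->vy[i] * dt;
    }
}
```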