Memory access pattern

In [[computing]], a '''memory access pattern''' is the pattern over time with which a system accesses [[Memory (computing)|memory]]. Access patterns differ in their level of [[locality of reference]], drastically affect [[Cache (computing)|cache]] performance, and have implications for the approach to [[parallelism]] in shared memory systems. Computer memory is usually described as 'random access', but traversals by software will still exhibit patterns that can be exploited for efficiency.
 
 
At one extreme is the [[sequential access]] pattern, where data is read, processed, and written in a straightforward incremental manner.
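A minimal sketch of such a streaming pass, in C (the function name and the scaling operation are illustrative, not from the article):

```c
#include <stddef.h>

/* Sequential access: each element is read once, processed, and the
 * result written once, in increasing address order.  Hardware
 * prefetchers recognise this stride-1 pattern and fetch ahead. */
void scale(const float *in, float *out, size_t n, float k) {
    for (size_t i = 0; i < n; i++)
        out[i] = in[i] * k;
}
```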
 
Strided or simple 2D access patterns (e.g. stepping through [[multi-dimensional array]]s) are similarly easy to predict, and are found in implementations of [[linear algebra]] and [[image processing]]. [[Loop tiling]] is an effective approach for improving the locality of such traversals.
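As one illustration of loop tiling, a blocked matrix transpose in C; the matrix and tile sizes are arbitrary example values:

```c
#include <stddef.h>

enum { N = 64, B = 8 };  /* matrix dimension and tile size -- illustrative */

/* Tiled (blocked) transpose: the two outer loops walk B x B tiles, so
 * both src and dst are touched in small regions that fit in cache,
 * instead of one of them being strided across an entire column. */
void transpose_tiled(const double *src, double *dst) {
    for (size_t ii = 0; ii < N; ii += B)
        for (size_t jj = 0; jj < N; jj += B)
            for (size_t i = ii; i < ii + B; i++)
                for (size_t j = jj; j < jj + B; j++)
                    dst[j * N + i] = src[i * N + j];
}
```

A naive transpose has the same loop body with only the two inner loops; tiling changes only the order of iteration, not the result.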
 
In [[3D rendering]], access patterns for [[texture mapping]] and [[rasterization]] of small primitives (with arbitrary distortions of complex surfaces) are often far from linear, but still exhibit spatial locality. This can be turned into good memory locality via some combination of [[Morton order]] and [[Tiling (computer graphics)|tiling]] for [[texture map]]s and [[frame buffer]] data, or by sorting primitives via [[tile based deferred rendering]].
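A Morton (Z-order) index is computed by interleaving the bits of the two coordinates; a simple 16-bit-per-axis version in C might look like this (production code typically uses bit tricks or lookup tables instead of a loop):

```c
#include <stdint.h>

/* Morton (Z-order) index: interleave the bits of x and y so that
 * texels that are close in 2D map to nearby 1D addresses, improving
 * the spatial locality of texture fetches. */
uint32_t morton2(uint32_t x, uint32_t y) {
    uint32_t z = 0;
    for (int i = 0; i < 16; i++) {
        z |= ((x >> i) & 1u) << (2 * i);      /* x bits -> even positions */
        z |= ((y >> i) & 1u) << (2 * i + 1);  /* y bits -> odd positions  */
    }
    return z;
}
```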
Data structures that rely heavily on [[pointer chasing]] can often produce poor locality of reference, although sorting can sometimes help.
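The problem with pointer chasing is visible in something as simple as a linked-list traversal, sketched here in C: each address depends on the previous load, so the hardware cannot prefetch ahead the way it can for an array.

```c
#include <stddef.h>

struct Node { int value; struct Node *next; };

/* Pointer chasing: the load of p->next must complete before the next
 * node's address is known, serialising the memory accesses; if nodes
 * were allocated at scattered addresses, each step is likely a miss. */
int sum_list(const struct Node *head) {
    int s = 0;
    for (const struct Node *p = head; p != NULL; p = p->next)
        s += p->value;
    return s;
}
```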
 
At the opposite extreme is a truly random memory access pattern. A few multiprocessor systems are specialised to handle such workloads.