=== Data oriented design ===
[[Data oriented design]] is an approach intended to maximise locality of reference by organising data according to how it is traversed in the various stages of a program, in contrast to the more common [[object oriented]] approach.
=== Random reads versus random writes ===
A task may have some inherent randomness, which can be handled either on the read side or on the write side, giving gather or scatter alternatives. These choices appear in GPGPU and graphics applications:
==== Sequential reads with scatter writes ====
The scatter approach may place less load on the cache hierarchy, since a [[processing element]] may dispatch writes in a 'fire and forget' manner (bypassing the cache altogether) whilst using predictable prefetching (or even DMA) for its source data. However, it may be harder to parallelise, since there is no guarantee that the writes do not interact.
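The pattern can be sketched in C as a loop with a predictable, sequential read stream and data-dependent write addresses (the function name and the doubling operation are illustrative assumptions, not from any particular API):

```c
#include <stddef.h>

/* Hypothetical scatter kernel: the source array is read sequentially
   (prefetch-friendly), but each result is written to a data-dependent,
   effectively random destination index. dst_idx must be a permutation
   or otherwise known not to alias if iterations run in parallel. */
void scatter(const float *src, float *dst, const size_t *dst_idx, size_t n)
{
    for (size_t i = 0; i < n; ++i) {
        /* sequential read ... random 'fire and forget' write */
        dst[dst_idx[i]] = src[i] * 2.0f;
    }
}
```

Parallelising this loop is only safe when the destination indices are known to be distinct, which is exactly the guarantee the scatter approach lacks in general.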
In the past, [[forward texture mapping]] attempted to handle the randomness with writes, whilst reading source texture information sequentially.
The [[playstation 2]] console used conventional inverse texture mapping, but handled any scatter/gather processing 'on-chip' using EDRAM, whilst 3D models (and much of the texture data) were fed sequentially from main memory by DMA. This is why it lacked support for indexed primitives, and sometimes needed to manage textures 'up front' in the [[display list]].
==== Gather reads with sequential writes ====
An algorithm may be re-worked such that the reads are random [[gather (vector addressing)|gathers]], whilst the writes are sequential. An example is found in [[inverse texture mapping]], where data can be written out linearly across scanlines whilst random-access texture addresses are calculated per pixel. The disadvantage is that caching (and latency hiding) is now essential for the reads; however, it is easier to parallelise, since the writes are guaranteed not to overlap. As such, the gather approach is more common in [[gpgpu]] programming,<ref name="gpgpu gems"/> where massive threading is used to hide read latencies.
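The inverse arrangement can be sketched as a loop whose output is written sequentially while each source address is computed per element (the function name and index array stand in for a real per-pixel texture-address calculation and are assumptions for illustration):

```c
#include <stddef.h>

/* Hypothetical gather kernel: output pixels are written sequentially
   across a scanline, whilst each source texel address is data-dependent
   (an effectively random cached read). Because out[i] is touched by
   exactly one iteration, iterations can safely run in parallel. */
void gather(float *out, const float *texture, const size_t *texel_idx, size_t n)
{
    for (size_t i = 0; i < n; ++i) {
        /* random read ... sequential, non-overlapping write */
        out[i] = texture[texel_idx[i]];
    }
}
```

Here the writes never overlap by construction, which is why this shape of loop maps naturally onto massively threaded GPGPU execution.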
== See also ==