=== Random ===
{{main|Random access}}
At the opposite extreme is a truly random memory access pattern. A few multiprocessor systems are specialised to deal with these.<ref>{{Cite conference |last=Wichmann |first=Nathan |date=2005 |title=Cray and HPCC: Benchmark Developments and Results from the Past Year |url=https://cug.org/5-publications/proceedings_attendee_lists/2005CD/S05_Proceedings/pages/Authors/Wichmann/Wichmann_paper.pdf |conference=CUG 2005 Proceedings}} See the global random access results for the Cray X1; its vector architecture hides latencies and is less sensitive to cache coherency.</ref> The [[Partitioned global address space|PGAS]] approach may help by sorting operations by data on the fly, which is useful when the problem ''is'' figuring out the locality of unsorted data.<ref name="PGAS programming">{{Cite AV media |url=https://www.youtube.com/watch?v=NU4VfjISk2M |title=Partitioned Global Address Space Programming - Kathy Yelick |date=2013-09-05 |last=CITRIS and the Banatao Institute |access-date=2024-11-02 |via=YouTube}} Covers cases where PGAS is a win and data may not already be sorted, e.g., dealing with complex graphs; see "science across the irregularity spectrum".</ref> Data structures that rely heavily on [[pointer chasing]] often exhibit poor [[locality of reference]], although sorting can sometimes help. Given a truly random memory access pattern, it may be possible to break it into stages (including scatter or gather passes, or other intermediate sorting), which may improve locality overall; this is often a prerequisite for [[Parallel computing|parallelizing]].
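As a minimal illustrative sketch (not taken from the cited sources), the following C program restructures a random gather by first sorting its indices, so that the reads sweep through the table in address order instead of jumping around; the names (<code>table</code>, <code>idx</code>, <code>random_gather</code>, <code>sorted_gather</code>) and the sizes are hypothetical.

<syntaxhighlight lang="c">
/* Sketch: turning a random gather into a near-sequential sweep
 * by sorting the indices first. Illustrative only. */
#include <stdio.h>
#include <stdlib.h>

#define N (1 << 24)   /* elements in the source table */
#define M (1 << 20)   /* number of gather operations  */

static int cmp_size_t(const void *a, const void *b)
{
    size_t x = *(const size_t *)a, y = *(const size_t *)b;
    return (x > y) - (x < y);
}

/* Random gather: each access may land anywhere in the table,
 * so caches and hardware prefetchers get little chance to help. */
static long long random_gather(const int *table, const size_t *idx)
{
    long long sum = 0;
    for (size_t i = 0; i < M; i++)
        sum += table[idx[i]];
    return sum;
}

/* Same work after sorting the indices: the reads now move through
 * the table in address order, improving locality of reference. */
static long long sorted_gather(const int *table, size_t *idx)
{
    qsort(idx, M, sizeof *idx, cmp_size_t);
    return random_gather(table, idx); /* identical loop, sequential sweep */
}

int main(void)
{
    int *table = malloc(N * sizeof *table);
    size_t *idx = malloc(M * sizeof *idx);
    if (!table || !idx)
        return 1;

    for (size_t i = 0; i < N; i++)
        table[i] = (int)i;
    /* rand() used for brevity; on platforms with a small RAND_MAX
     * the indices will not span the whole table. */
    for (size_t i = 0; i < M; i++)
        idx[i] = (size_t)rand() % N;

    printf("random gather: %lld\n", random_gather(table, idx));
    printf("sorted gather: %lld\n", sorted_gather(table, idx));

    free(table);
    free(idx);
    return 0;
}
</syntaxhighlight>

Sorting the indices costs extra work, so such a restructuring only pays off when the table is much larger than the cache and the gather dominates the run time, or when the sorted indices are reused.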
 
== Approaches ==