==XMT prototyping and links to more information==
In January 2007, a 64-processor computer <ref>University of Maryland, press release, June 26, 2007: [http://www.newsdesk.umd.edu/scitech/release.cfm?ArticleID=1459 "Maryland Professor Creates Desktop Supercomputer"] {{Webarchive|url=https://web.archive.org/web/20091214195046/http://www.newsdesk.umd.edu/scitech/release.cfm?ArticleID=1459 |date=2009-12-14 }}.</ref> named Paraleap,<ref>University of Maryland, A. James Clark School of Engineering, press release, November 28, 2007: [http://www.eng.umd.edu/media/pressreleases/pr112707_superwinner.html "Next Big "Leap" in Computing Technology Gets a Name"].</ref> demonstrating the overall concept, was completed. The XMT concept was presented in {{harvtxt|Vishkin|Dascal|Berkovich|Nuzman|1998}} and {{harvtxt|Naishlos|Nuzman|Tseng|Vishkin|2003}}, and the XMT 64-processor computer in {{harvtxt|Wen|Vishkin|2008}}. Since making parallel programming easy is one of the biggest challenges facing computer science today, the demonstration also sought to include teaching the basics of PRAM algorithms and XMTC programming to students ranging from high school {{harvtxt|Torbert|Vishkin|Tzur|Ellison|2010}} to graduate school.
Experimental work demonstrating speedups was reported in {{harvtxt|Caragea|Vishkin|2011}} for the [[maximum flow problem]] and in two papers by Edwards and Vishkin for graph connectivity, biconnectivity, and triconnectivity.
XMT prototyping culminated in {{harvtxt|Ghanim|Vishkin|Barua|2018}}, establishing that lock-step parallel programming (using ICE) can achieve the same performance as the fastest hand-tuned multi-threaded code on XMT systems. This 2018 result sharpens the contrast between XMT programming and the multi-threaded programming approaches employed by nearly all other many-core systems, whose race conditions and other demands tend to challenge, and sometimes even defeat, programmers {{harvtxt|Vishkin|2014}}.
*{{Citation
| doi=10.1145/321812.321815
| citeseerx=10.1.1.100.9361
| s2cid=16416106
}}.
*{{Citation
| year = 1992
| isbn = 978-0-201-54856-3
}}
*{{Citation
| year = 2001
| isbn = 978-0-471-35351-5
}}
*{{Citation
| year = 2003
| title = Towards a First Vertical Prototyping of an Extremely Fine-Grained Parallel Programming Approach
| journal = Theory of Computing Systems
| volume = 36
| issue =5
| pages = 521–552
| doi =10.1007/s00224-003-1086-6
| s2cid = 1929495
| url=http://www.umiacs.umd.edu/users/vishkin/XMT/spaa01-j-03.pdf
}}.
| doi=
| contribution-url=http://www.umiacs.umd.edu/users/vishkin/XMT/spaa98.ps
}}.
*{{Citation
| url=http://www.umiacs.umd.edu/users/vishkin/XMT/CompFrontiers08.pdf
| isbn=9781605580777
| s2cid=11557669
}}.
*{{Citation
| journal=Communications of the ACM
| volume=54
| doi-access=
}}.
*{{Citation
| title-link=Symposium on Parallelism in Algorithms and Architectures
| isbn=9781450307437
| s2cid=5511743
}}.
*{{Citation
| contribution=Better speedups using simpler parallel programming for graph connectivity and biconnectivity
| pages=103–114
| year=2012
| doi=10.1145/2141702.2141714
| isbn=9781450312110
| s2cid=15095569
}}.
*{{Citation
| contribution=Brief announcement: speedups for parallel graph triconnectivity
| pages=190–192
| year=2012
| doi=10.1145/2312005.2312042
| title-link=Symposium on Parallelism in Algorithms and Architectures
| isbn=9781450312134
| s2cid=16908459
}}.
*{{Citation
| volume=57
| issue=4
| s2cid=30098719
}}.
*{{Citation
| hdl=1903/18521
| doi-access=free
| hdl-access=free
}}.
==Notes==
{{reflist}}