Draft:Untether AI
{{AfC topic|org}}
 
'''Untether AI''' was a Canadian technology company that developed microchips and compilers for neural network processing. Its patented computational memory<ref>{{cite book |first1=Duncan | last1=Elliott| last2=Snelgrove |first2=Martin| last3=Stumm |first3=Michael | chapter=Computational Ram: A Memory-simd Hybrid and Its Application to DSP|title=Proceedings of the IEEE Custom Integrated Circuits Conference |date=1992 | pages=30.6.1–30.6.4|doi=10.1109/CICC.1992.591879 | isbn=0-7803-0246-X|chapter-url=https://ieeexplore.ieee.org/document/5727436 |access-date=15 June 2025}}</ref><ref>{{cite journal |last1=Snelgrove |first1=Martin |last2=Wiebe |first2=Darrick |title=Computational memory |journal=US Patent 11614947B2 |date=2023-03-28 |url=https://patents.google.com/patent/US11614947B2/en |access-date=15 June 2025}}</ref> architecture (also known as at-memory compute)<ref>{{cite book |first1=Bob|last1=Beachler |first2=Martin|last2=Snelgrove |chapter=Untether AI : Boqueria |title=2022 IEEE Hot Chips 34 Symposium (HCS) |date=2022 |pages=1–19 |doi=10.1109/HCS55958.2022.9895618 |isbn=978-1-6654-6028-6 |chapter-url=https://ieeexplore.ieee.org/document/9895618|access-date=11 June 2025}}</ref> was built largely on standard silicon processes, with some customization of memory cells and processing elements.
The US$125 million raised in Series B funding in July 2021<ref>{{cite news |title=Intel-backed Untether AI raises $125 million |url=https://betakit.com/intel-backed-untether-ai-raises-125-million-adds-cppib-tracker-capital-as-investors/ |publisher=betakit.com |date=20 July 2021 |access-date=11 June 2025}}</ref> led to a top MLPerf disclosure in August 2024<ref>{{cite news |title=New MLPerf Inference v4.1 Benchmark Results |date=28 August 2024 |url=https://mlcommons.org/2024/08/mlperf-inference-v4-1-results/ |publisher=mlcommons.org |access-date=12 June 2025}}</ref><ref>{{cite news |title=AMD and Untether Take On Nvidia in MLPerf Benchmarks |publisher=eetimes.com |url=https://www.eetimes.com/amd-and-untether-take-on-nvidia-in-mlperf-benchmarks/ |date=28 August 2024 |access-date=12 June 2025}}</ref><ref>{{cite news|title=Untether AI Announces speedAI Accelerator Cards|date=28 August 2024 |url=https://finance.yahoo.com/news/untether-ai-announces-speedai-accelerator-150000879.html |publisher=yahoo.com |access-date=11 June 2025}}</ref> and the launch of its speedAI-240 product in October 2024.<ref>{{cite web |title=Untether AI Ships speedAI 240 Slim |url=https://www.businesswire.com/news/home/20241028172679/en/Untether-AI-Ships-speedAI-240-Slim-Worlds-Fastest-Most-Energy-Efficient-AI-Inference-Accelerator-for-Cloud-to-Edge-Applications |publisher=businesswire.com |date=October 2024 |access-date=11 June 2025}}</ref> The MLPerf results indicated that the at-memory architecture could achieve three to six times the power efficiency of competing approaches. Despite this early success, Untether AI was shut down in June 2025,<ref>{{cite web |title=Untether AI Shuts Down |url=https://www.eetimes.com/untether-ai-shuts-down-engineering-team-joins-amd/ |date=June 2025 |publisher=eetimes.com |access-date=11 June 2025}}</ref> amid speculation as to the reasons for the shutdown.<ref>{{cite web |title=Why did Untether AI fail |url=https://www.zach.be/p/why-did-untether-ai-fail |date=June 2025 |publisher=www.zach.be|access-date=15 July 2025}}</ref>
== References ==