Draft:Untether AI
{{AFC submission|d|corp|u=2607:F2C0:B199:F700:4D70:201C:3E15:980E|ns=118|decliner=CanonNi|declinets=20250616113256|ts=20250616102449}}
{{AfC submission|t||ts=20250611234732|u=Rhg2|ns=118|demo=}}{{AFC comment|1=In accordance with Wikipedia's [[Wikipedia:Conflict of interest|Conflict of interest policy]], I disclose that I have a conflict of interest regarding the subject of this article. <!--Comment automatically added by the Article Wizard--> [[User:Rhg2|Rhg2]] ([[User talk:Rhg2|talk]]) 23:47, 11 June 2025 (UTC)}}
{{AFC submission|d|v|u=Rhg2|ns=118|decliner=KylieTastic|declinets=20250612215545|reason2=nn|small=yes|ts=20250612214818}}
{{AFC submission|d|corp|u=Rhg2|ns=118|decliner=CanonNi|declinets=20250612075226|small=yes|ts=20250612022829}}
{{AFC submission|d|corp|u=Rhg2|ns=118|decliner=CF-501 Falcon|declinets=20250612013544|small=yes|ts=20250611234941}}
 
{{AFC comment|1=Most sources are [[WP:ROUTINE|routine coverage]]. '''<small style="font-family:monospace">'​'​'[​[<big>[[User:CanonNi]]</big>]​]'​'​'</small>''' ([[User talk:CanonNi|💬]] • [[Special:Contribs/CanonNi|✍️]]) 11:32, 16 June 2025 (UTC)}}
 
 
----
<!-- Important, do not remove anything above this line before article has been created. -->
 
{{Short description|Defunct Canadian AI inference chip company}}
{{Draft topics|internet-culture|software|technology}}
{{AfC topic|org}}
 
'''Untether AI''' was a Canadian technology company that developed microchips and compilers for neural net processing. Its patented computational memory<ref>{{cite book |first1=Duncan |last1=Elliott |last2=Snelgrove |first2=Martin |last3=Stumm |first3=Michael |chapter=Computational Ram: A Memory-simd Hybrid and Its Application to DSP |title=Proceedings of the IEEE Custom Integrated Circuits Conference |date=1992 |pages=30.6.1–30.6.4 |doi=10.1109/CICC.1992.591879 |isbn=0-7803-0246-X}}</ref><ref>{{cite journal |last1=Snelgrove |first1=Martin |last2=Wiebe |first2=Darrick |title=Computational memory |journal=US Patent 11614947B2 |date=2023-03-28 |url=https://patents.google.com/patent/US11614947B2/en |access-date=15 June 2025}}</ref> architecture (also known as at-memory compute architecture<ref>{{cite book |first1=Bob |last1=Beachler |first2=Martin |last2=Snelgrove |chapter=Untether AI : Boqueria |title=2022 IEEE Hot Chips 34 Symposium (HCS) |date=2022 |pages=1–19 |doi=10.1109/HCS55958.2022.9895618 |isbn=978-1-6654-6028-6}}</ref>) was built largely on standard silicon processes, with some customization of memory cells and processing elements. The $125 million raised in Series B funding in July 2021<ref>{{cite news |title=Intel-backed Untether AI raises $125 million |url=https://betakit.com/intel-backed-untether-ai-raises-125-million-adds-cppib-tracker-capital-as-investors/ |publisher=betakit.com |date=20 July 2021 |access-date=11 June 2025}}</ref> led to a top MLPerf disclosure in August 2024<ref>{{cite news |title=New MLPerf Inference v4.1 Benchmark Results |date=28 August 2024 |url=https://mlcommons.org/2024/08/mlperf-inference-v4-1-results/ |publisher=mlcommons.org |access-date=12 June 2025}}</ref><ref>{{cite news |title=AMD and Untether Take On Nvidia in MLPerf Benchmarks |publisher=eetimes.com |url=https://www.eetimes.com/amd-and-untether-take-on-nvidia-in-mlperf-benchmarks/ |date=28 August 2024 |access-date=12 June 2025}}</ref><ref>{{cite news |title=Untether AI Announces speedAI Accelerator Cards |date=28 August 2024 |url=https://finance.yahoo.com/news/untether-ai-announces-speedai-accelerator-150000879.html |publisher=yahoo.com |access-date=11 June 2025}}</ref> and the launch of its speedAI-240 product in October 2024.<ref>{{cite web |title=Untether AI Ships speedAI-240 Slim |url=https://www.businesswire.com/news/home/20241028172679/en/Untether-AI-Ships-speedAI-240-Slim-Worlds-Fastest-Most-Energy-Efficient-AI-Inference-Accelerator-for-Cloud-to-Edge-Applications |publisher=businesswire.com |date=October 2024 |access-date=11 June 2025}}</ref> The MLPerf results indicated the at-memory architecture could achieve 3X to 6X the power efficiency of competing approaches. Despite this early success, Untether AI was shut down in June 2025,<ref>{{cite web |title=Untether AI Shuts Down |url=https://www.eetimes.com/untether-ai-shuts-down-engineering-team-joins-amd/ |date=June 2025 |publisher=eetimes.com |access-date=11 June 2025}}</ref> with speculation as to the reasons for the shutdown.<ref>{{cite web |title=Why did Untether AI fail |url=https://www.zach.be/p/why-did-untether-ai-fail |date=June 2025 |publisher=www.zach.be |access-date=15 July 2025}}</ref>
== References ==
<!-- Inline citations added to your article will automatically display here. See en.wikipedia.org/wiki/WP:REFB for instructions on how to add citations. -->