Draft:Untether AI

{{AFC submission|d|corp|u=2607:F2C0:B199:F700:4D70:201C:3E15:980E|ns=118|decliner=CanonNi|declinets=20250616113256|ts=20250616102449}} <!-- Do not remove this line! -->
{{AFC submission|d|v|u=Rhg2|ns=118|decliner=KylieTastic|declinets=20250612215545|reason2=nn|small=yes|ts=20250612214818}} <!-- Do not remove this line! -->
{{AFC submission|d|corp|u=Rhg2|ns=118|decliner=CanonNi|declinets=20250612075226|small=yes|ts=20250612022829}}
{{AFC submission|d|corp|u=Rhg2|ns=118|decliner=CF-501 Falcon|declinets=20250612013544|small=yes|ts=20250611234941}}
 
{{AFC comment|1=Most sources are [[WP:ROUTINE|routine coverage]]. '''<small style="font-family:monospace">'​'​'[​[<big>[[User:CanonNi]]</big>]​]'​'​'</small>''' ([[User talk:CanonNi|💬]] • [[Special:Contribs/CanonNi|✍️]]) 11:32, 16 June 2025 (UTC)}}
 
{{AFC comment|1=In accordance with Wikipedia's [[Wikipedia:Conflict of interest|Conflict of interest policy]], I disclose that I have a conflict of interest regarding the subject of this article. <!--Comment automatically added by the Article Wizard--> [[User:Rhg2|Rhg2]] ([[User talk:Rhg2|talk]]) 23:47, 11 June 2025 (UTC)}}
----
 
{{Short description|Defunct Canadian AI chip company}}
{{Draft topics|internet-culture|software|computing|technology}}
{{AfC topic|org}}
 
'''Untether AI''' was a Canadian technology company that developed microchips and compilers for neural net processing. Its patented computational memory<ref>{{cite book |last1=Elliott |first1=Duncan |last2=Snelgrove |first2=Martin |last3=Stumm |first3=Michael |chapter=Computational RAM: A Memory-SIMD Hybrid and Its Application to DSP |title=Proceedings of the IEEE Custom Integrated Circuits Conference |date=1992 |pages=30.6.1–30.6.4 |doi=10.1109/CICC.1992.591879 |isbn=0-7803-0246-X}}</ref><ref>{{cite journal |last1=Snelgrove |first1=Martin |last2=Wiebe |first2=Darrick |title=Computational memory |journal=US Patent 11614947B2 |date=2023-03-28 |url=https://patents.google.com/patent/US11614947B2/en |access-date=15 June 2025}}</ref> architecture (also known as at-memory compute)<ref>{{cite book |last1=Beachler |first1=Bob |last2=Snelgrove |first2=Martin |chapter=Untether AI: Boqueria |title=2022 IEEE Hot Chips 34 Symposium (HCS) |date=2022 |pages=1–19 |doi=10.1109/HCS55958.2022.9895618 |isbn=978-1-6654-6028-6}}</ref> was built largely on standard silicon processes, with some customization of memory cells and processing elements. The US$125 million raised in Series B funding in July 2021<ref>{{cite news |title=Intel-backed Untether AI raises $125 million |url=https://betakit.com/intel-backed-untether-ai-raises-125-million-adds-cppib-tracker-capital-as-investors/ |publisher=betakit.com |date=20 July 2021 |access-date=11 June 2025}}</ref> led to a top MLPerf disclosure in August 2024<ref>{{cite news |title=New MLPerf Inference v4.1 Benchmark Results |url=https://mlcommons.org/2024/08/mlperf-inference-v4-1-results/ |publisher=mlcommons.org |date=28 August 2024 |access-date=12 June 2025}}</ref><ref>{{cite news |last1=Ward-Foxton |first1=Sally |title=AMD and Untether Take On Nvidia in MLPerf Benchmarks |url=https://www.eetimes.com/amd-and-untether-take-on-nvidia-in-mlperf-benchmarks/ |publisher=eetimes.com |date=28 August 2024 |access-date=12 June 2025}}</ref><ref>{{cite news |title=Untether AI Announces speedAI Accelerator Cards |url=https://finance.yahoo.com/news/untether-ai-announces-speedai-accelerator-150000879.html |publisher=yahoo.com |date=28 August 2024 |access-date=11 June 2025}}</ref> and the launch of its speedAI-240 product in October 2024.<ref>{{cite web |title=Untether AI Ships speedAI-240 Slim |url=https://www.businesswire.com/news/home/20241028172679/en/Untether-AI-Ships-speedAI-240-Slim-Worlds-Fastest-Most-Energy-Efficient-AI-Inference-Accelerator-for-Cloud-to-Edge-Applications |publisher=businesswire.com |date=October 2024 |access-date=11 June 2025}}</ref> The MLPerf results indicated that the at-memory architecture could achieve 3X to 6X the power efficiency of competing approaches. Despite this early success, Untether AI was shut down in June 2025,<ref>{{cite web |title=Untether AI Shuts Down |url=https://www.eetimes.com/untether-ai-shuts-down-engineering-team-joins-amd/ |publisher=eetimes.com |date=June 2025 |access-date=11 June 2025}}</ref> with speculation as to the reasons for the shutdown.<ref>{{cite web |title=Why did Untether AI fail |url=https://www.zach.be/p/why-did-untether-ai-fail |publisher=www.zach.be |date=June 2025 |access-date=15 July 2025}}</ref>
<!-- Important, do not remove anything above this line before article has been created. -->
 
== References ==
<!-- Inline citations added to your article will automatically display here. See en.wikipedia.org/wiki/WP:REFB for instructions on how to add citations. -->