Since the renaissance of [[Machine learning|machine-learning]]-based artificial intelligence in the 2010s, several ___domain-specific architectures have been developed to accelerate inference for different forms of [[Artificial neural network|artificial neural networks]]. Examples include [[Google|Google's]] [[Tensor Processing Unit|TPU]], NVIDIA's NVDLA<ref>{{Cite web |title=NVDLA - Microarchitectures - Nvidia - WikiChip |url=https://en.wikichip.org/wiki/nvidia/microarchitectures/nvdla |access-date=2023-07-06 |website=en.wikichip.org |language=en}}</ref> and [[Arm (company)|ARM]]'s MLP.<ref>{{Cite web |title=Machine Learning Processor (MLP) - Microarchitectures - ARM - WikiChip |url=https://en.wikichip.org/wiki/arm_holdings/microarchitectures/mlp |access-date=2023-07-06 |website=en.wikichip.org |language=en}}</ref>
== Guidelines for DSA ==
[[John L. Hennessy|John Hennessy]] and [[David Patterson (computer scientist)|David Patterson]] outlined five principles for DSA design that lead to better area efficiency and energy savings. A further objective of these architectures is often to reduce non-recurring engineering (NRE) costs, so that the investment in a specialized solution can be amortized more easily.<ref name=":0" />
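One of these principles is to reduce data to the simplest size and type that the ___domain requires; for neural-network inference this commonly means replacing 32-bit floating point with 8-bit integer arithmetic, as in the first-generation TPU. The following Python/NumPy sketch emulates on a CPU the arithmetic pattern an int8 accelerator performs in hardware; the function names and the symmetric per-tensor quantization scheme are illustrative assumptions, not details from the cited sources.

<syntaxhighlight lang="python">
import numpy as np

def quantize_int8(x):
    """Symmetric per-tensor quantization of float32 values to int8.

    Returns the quantized tensor and the scale needed to recover
    approximate real values (illustrative scheme)."""
    scale = np.max(np.abs(x)) / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def int8_matmul(a_q, a_scale, b_q, b_scale):
    """Integer matrix multiply with int32 accumulation followed by a
    single floating-point rescale -- the arithmetic pattern that int8
    inference accelerators implement in hardware."""
    acc = a_q.astype(np.int32) @ b_q.astype(np.int32)
    return acc.astype(np.float32) * (a_scale * b_scale)

# Toy dense layer y = x @ W, evaluated both in float32 and in int8.
rng = np.random.default_rng(0)
x = rng.standard_normal((1, 64)).astype(np.float32)
w = rng.standard_normal((64, 32)).astype(np.float32)

x_q, x_s = quantize_int8(x)
w_q, w_s = quantize_int8(w)

y_ref = x @ w                          # float32 reference result
y_q = int8_matmul(x_q, x_s, w_q, w_s)  # emulated accelerator result
print("max abs error:", np.max(np.abs(y_ref - y_q)))
</syntaxhighlight>

Because an 8-bit integer multiplier requires far less silicon area and energy than a 32-bit floating-point unit, this data-size reduction directly serves the area-efficiency and energy goals stated above, at the cost of a small approximation error.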