and deployment in devices such as self-driving cars.
<ref>{{cite web|title=nvidia introduces supercomputer for self driving cars|url=http://gas2.org/2016/01/06/nvidia-introduces-supercomputer-for-self-driving-cars/}}</ref>
However, at least a factor of 10 in efficiency<ref>{{cite web|title=Google boosts machine learning with its Tensor Processing Unit|url=http://techreport.com/news/30155/google-boosts-machine-learning-with-its-tensor-processing-unit}} mentions a 10x efficiency gain</ref> can still be gained with a more specific design. The [[memory access pattern]] of AI calculations differs from that of graphics, with a more predictable but deeper [[dataflow]] (benefiting from the ability to keep more temporary variables on-chip); GPUs, by contrast, devote silicon to efficiently handling the non-linear [[memory access pattern#GATHER_SCATTER|gather-scatter]] addressing (between textures and frame buffers) and [[texture filtering]] needed for 3D rendering.
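The contrast can be sketched in C (an illustrative example only; the array names, sizes, and loop structure are hypothetical and not taken from any particular accelerator or workload):

<syntaxhighlight lang="c">
#define N 1024

/* Predictable, deep dataflow typical of neural-network layers:
   inputs and weights are streamed sequentially, and the running
   sum ("acc") can remain in a register or on-chip buffer. */
float dense_dot(const float *input, const float *weights)
{
    float acc = 0.0f;                 /* temporary kept on-chip */
    for (int i = 0; i < N; i++)
        acc += input[i] * weights[i]; /* sequential, predictable addresses */
    return acc;
}

/* Gather-style access typical of texture sampling in 3D rendering:
   each fetch address depends on a separately computed index, so the
   hardware must handle non-linear (gather-scatter) addressing. */
float gather_sum(const float *texture, const int *indices)
{
    float acc = 0.0f;
    for (int i = 0; i < N; i++)
        acc += texture[indices[i]];   /* data-dependent, scattered addresses */
    return acc;
}
</syntaxhighlight>

In the first loop the fetch addresses are known in advance and the accumulator never leaves the chip; in the second, each address depends on data only available at run time, which is the case GPU memory systems are built to handle efficiently.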
As of 2016, vendors are pushing their own terms in the hope that their designs and [[API]]s will dominate. In the past, after [[graphics accelerator]]s emerged, the industry eventually adopted [[Nvidia]]'s self-assigned term "[[GPU]]" as the collective noun for "graphics accelerators", which had settled on an overall pipeline patterned around [[Direct3D]]. There is no consensus on the boundary between these devices, nor on the exact form they will take; however, several examples clearly aim to fill this new space.