| accessdate=2020-04-08
}}</ref>
The Tensor Cores use CUDA [[Warp (CUDA)|Warp]]-Level Primitives, which are executed cooperatively by the 32 parallel threads of a warp, to take advantage of their parallel architecture.<ref>
{{cite web
| url=https://devblogs.nvidia.com/using-cuda-warp-level-primitives/
| title=Using CUDA Warp-Level Primitives
| publisher=[[Nvidia]]
| date=2018-01-15
| accessdate=2020-04-08
| quote=''NVIDIA GPUs execute groups of threads known as warps in SIMT (Single Instruction, Multiple Thread) fashion''
}}
</ref>
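A minimal sketch of this warp-level programming model, assuming a single warp, 16×16×16 half-precision tiles and a Volta-class (sm_70) or newer GPU; the kernel and variable names are illustrative and not taken from the cited sources:

<syntaxhighlight lang="cuda">
#include <mma.h>
#include <cuda_fp16.h>
using namespace nvcuda;

// Illustrative kernel: all 32 threads of one warp cooperate through the
// warp-level WMMA API to issue a single 16x16x16 matrix multiply-accumulate
// on the Tensor Cores.
__global__ void wmma_tile_mma(const half *a, const half *b, float *c) {
    // Each fragment is distributed across the registers of the whole warp.
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> aFrag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> bFrag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> accFrag;

    wmma::fill_fragment(accFrag, 0.0f);             // zero the accumulator
    wmma::load_matrix_sync(aFrag, a, 16);           // warp loads a 16x16 tile of A
    wmma::load_matrix_sync(bFrag, b, 16);           // warp loads a 16x16 tile of B
    wmma::mma_sync(accFrag, aFrag, bFrag, accFrag); // one Tensor Core multiply-accumulate
    wmma::store_matrix_sync(c, accFrag, 16, wmma::mem_row_major);
}
</syntaxhighlight>

All 32 threads of the warp must reach each <code>wmma</code> call together, because the matrix fragments are held collectively in the registers of the whole warp rather than by any single thread.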
==See also==