{{Short description|none}}
{{Use dmy dates|date=April 2024}}
This list contains general information about [[graphics processing unit]]s (GPUs) and [[video card]]s from [[Nvidia]], based on official specifications. In addition some [[Comparison of Nvidia nForce chipsets|Nvidia motherboards]] come with integrated onboard GPUs. Limited/special/collectors' editions or AIB versions are not included.
{{toclimit|3}}
==Field explanations==
The fields in the tables listed below describe the following:
*
* ''Launch'' – Date of release of the processor.
*
* ''
* ''
* ''
* ''SM Count'' – Number of streaming multiprocessors.<ref name="SMCount-motherboardsdotorg">{{cite web |url=http://www.motherboards.org/reviews/hardware/2038_5.html |title=Nvidia GeForce GTX 480 Video Card Review :: Streaming Multiprocessor |website=Motherboards.org |date=March 26, 2010 |author=Benjamin Sun |access-date=March 14, 2012 |archive-url=https://web.archive.org/web/20111220040904/http://www.motherboards.org/reviews/hardware/2038_5.html |archive-date=December 20, 2011 |url-status=live }}</ref>
* ''
* ''Memory clock'' – The factory effective memory clock frequency (while some manufacturers adjust clocks lower and higher, this number will always be the reference clocks used by Nvidia). All DDR/GDDR memories operate at half this frequency, except for GDDR5, which operates at one quarter of this frequency.
* ''Core config'' – The layout of the graphics pipeline, in terms of functional units. Over time the number, type, and variety of functional units in the GPU core has changed significantly; before each section in the list there is an explanation as to what functional units are present in each generation of processors. In later models, shaders are integrated into a unified shader architecture, where any one shader can perform any of the functions listed.
* ''[[Fillrate]]'' – Maximum theoretical fill rate in textured pixels per second. This number is generally used as a ''maximum throughput number'' for the GPU and generally, a higher fill rate corresponds to a more powerful (and faster) GPU.
* ''Memory subsection''
**''[[Bandwidth (computing)|Bandwidth]]'' – Maximum theoretical bandwidth for the processor at factory clock with factory bus width. GHz = 10{{sup|9}} Hz.
** ''Bus type'' – Type of memory bus or buses used.
** ''Bus width'' – Maximum bit width of the memory bus or buses used. This will always be a factory bus width.
* ''API support section''
** ''[[Direct3D]]'' – Maximum version of Direct3D fully supported.
** ''[[OpenGL]]'' – Maximum version of OpenGL fully supported.
** ''[[OpenCL]]'' – Maximum version of OpenCL fully supported.
** ''[[Vulkan (API)|Vulkan]]'' – Maximum version of Vulkan fully supported.
** ''[[CUDA]]'' – Maximum version of CUDA fully supported.
* ''Features'' – Added features that are not standard as a part of the two graphics libraries.
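The derived columns (bandwidth and fill rates) follow directly from the listed memory clock, bus width, core clock, and core configuration. The sketch below is a minimal illustration in Python; the function names and example figures are hypothetical and do not describe any specific card in the tables.
<syntaxhighlight lang="python">
# Minimal sketch of how the derived table columns follow from the listed fields.
# Example values are hypothetical and do not describe a specific card.

def bandwidth_gb_s(effective_mem_clock_mhz: float, bus_width_bits: int) -> float:
    """Theoretical memory bandwidth in GB/s (GB = 10^9 bytes)."""
    return effective_mem_clock_mhz * 1e6 * bus_width_bits / 8 / 1e9

def pixel_fillrate_mpixels_s(core_clock_mhz: float, pixel_pipelines: int) -> float:
    """Theoretical pixel fill rate: core clock times pixel pipelines (ROPs on later parts)."""
    return core_clock_mhz * pixel_pipelines

def texture_fillrate_mtexels_s(core_clock_mhz: float, tmus: int) -> float:
    """Theoretical texture fill rate: core clock times texture mapping units."""
    return core_clock_mhz * tmus

# Hypothetical card: 200 MHz core, 4 pixel pipelines, 4 TMUs, 300 MHz effective DDR on a 128-bit bus.
print(bandwidth_gb_s(300, 128))            # 4.8 GB/s
print(pixel_fillrate_mpixels_s(200, 4))    # 800 MPixels/s
print(texture_fillrate_mtexels_s(200, 4))  # 800 MTexels/s
</syntaxhighlight>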
==Desktop GPUs==
===Pre-GeForce===
{{unreferenced section|date=May 2024}}
{{further|Fahrenheit (microarchitecture)}}
{{sort-under}}
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
! rowspan="2" style="vertical-align: bottom"|Model{{spaces}}name
! rowspan="2" style="vertical-align: bottom"|Launch
! rowspan="2" |[[Code name]]
! rowspan="2" |[[Semiconductor device fabrication|Fab]] ([[nanometer|nm]])<ref name="vintage3d">{{cite web |title=3D accelerator database |url=http://vintage3d.org/dbn.php |website=Vintage 3D |access-date=21 July 2019 |archive-url=https://web.archive.org/web/20181023222614/http://www.vintage3d.org/dbn.php |archive-date=23 October 2018 |url-status=live}}</ref>
! rowspan="2" |Transistors (million)
! rowspan="2" |Die size (mm<sup>2</sup>)
! rowspan="2" |[[Computer bus|Bus]] [[I/O interface|interface]]
! rowspan="2" |Core clock ([[Hertz|MHz]])
! rowspan="2" |Memory clock ([[Hertz|MHz]])
! rowspan="2" |Core config{{efn|name=geforce 256 1|[[Pixel pipeline]]s: [[texture mapping unit]]s: [[render output unit]]s}}
! colspan="4" |Memory
! colspan="4" |[[Fillrate]]
! colspan="2" |Latest{{spaces}}[[Application programming interface|API]]{{spaces}}support
! rowspan="2" |TDP (Watts)
|-
! Size ([[Mebibyte|MiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width ([[bit]])
! MOps/s
! MPixels/s
! MTexels/s
! MVertices/s
! [[Direct3D]]
! [[OpenGL]]
|-
! style="text-align:left
| {{Date table sorting|May 22, 1995}}
|
| [[STMicroelectronics|SGS]]<br />[[600 nm process|500 nm]]
| 1<ref>{{Cite web|url=https://www.techpowerup.com/gpu-specs/nvidia-nv1.g596|title=NVIDIA NV1 GPU Specs {{pipe}} TechPowerUp GPU Database |date=25 August 2023}}</ref>
|
|
|
|
| rowspan="3" |1:1:1
|
| 0.48
| FPM<br/>EDO<br />VRAM
| 64
| 75
| 75
| 75
| rowspan="10" |0
| 1.0
| n/a
|
|-
! style="text-align:left
| {{Date table sorting|August 25, 1997}}
| rowspan="2" |NV3
| SGS [[350 nm]]
| rowspan="2" |4<ref>{{Cite web|url=https://www.techpowerup.com/gpu-specs/nvidia-nv3.g227|title=NVIDIA NV3 GPU Specs {{pipe}} TechPowerUp GPU Database |date=25 August 2023}}</ref>
| rowspan="2" |90
| AGP 1x,<ref>{{cite web |url=http://www.nvidia.com/object/RIVA_128_FAQ.html |title=RIVA 128/ZX/TNT FAQ |access-date=June 15, 2018 |archive-url=https://web.archive.org/web/20180616103247/http://www.nvidia.com/object/RIVA_128_FAQ.html |archive-date=June 16, 2018 |url-status=live}}</ref> PCI
| rowspan="2" |100
| rowspan="2" |100
| 4
| rowspan="2" |1.6
| rowspan="9" |SDR
| rowspan="3" |128
| rowspan="2" |100
| rowspan="2" |100
| rowspan="2" |100
|
| rowspan="2" |1.0
| ?
|-
! style="text-align:left
| {{Date table sorting|February 23, 1998}}
| SGS/[[TSMC]] 350 nm
| rowspan="2" |AGP 2x, PCI
| 8
|
|
|-
! style="text-align:left
| {{Date table sorting|June 15, 1998}}
| NV4
| [[TSMC]] 350 nm
| 7<ref>{{Cite web|url=https://www.techpowerup.com/gpu-specs/nvidia-nv4.g226|title=NVIDIA NV4 GPU Specs {{pipe}} TechPowerUp GPU Database |date=25 August 2023}}</ref>
|
| 90
| 110
|
| 8<br />16
| 1.76
| 180
|
|
|
| rowspan="7" |1.2
|
|-
! style="text-align:left
| {{Date table sorting|March 22, 1999}}
| rowspan="3" |NV6
| rowspan="4" |TSMC [[250 nanometer|250 nm]]
| rowspan="3" |
| rowspan="3" |
| AGP 4x, PCI
| 100
|
| 8<br />16
| 1.0
| rowspan="3" |64
| 200
| 200
|
|
|
|-
! style="text-align:left
| {{Date table sorting|March 2000}}
|
| 80
| 100
| 8<br />16
| 0.8
| 160
| 160
| 160
|
| ?
|-
! style="text-align:left
| {{Date table sorting|October 1999}}
| rowspan="4" |AGP 4x, PCI
| rowspan="2" |125
| rowspan="2" |150
| 8<br />16<br />32
| 1.2
| rowspan="2" |250
| rowspan="2" |250
| rowspan="2" |250
|
| ?
|-
! style="text-align:left
| {{Date table sorting|March 15, 1999}}
| rowspan="3" |NV5
| rowspan="3" |15<ref>{{Cite web|url=https://www.techpowerup.com/gpu-specs/nvidia-nv5b.g333|title=NVIDIA NV5 GPU Specs {{pipe}} TechPowerUp GPU Database |date=21 September 2023}}</ref>
| rowspan="3" |90
| 16<br />32
| 2.4
| rowspan="3" |128
|
|
|-
! style="text-align:left
| {{Date table sorting|October 12, 1999}}
| TSMC 220 nm
| 143
| 166
| 16<br />32
| 2.656
| 286
|
|
|
|
|-
! style="text-align:left
| {{Date table sorting|March 15, 1999}}
| TSMC 250 nm
| 150
| 183
| 16<br />32
| 2.928
| 300
| 300
| 300
|
|
|}
{{notelist}}
===GeForce 256 series===
{{Further|GeForce 256|Celsius (microarchitecture)}}
* All models are made via [[TSMC]] [[250 nm process|220 nm]] fabrication process
* All models support [[Direct3D]] 7.0 and [[OpenGL]] 1.2
* All models support hardware Transform and Lighting (T&L) and Cube Environment Mapping
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
! rowspan="2"
! rowspan="2"
! rowspan="2" |[[Code
! rowspan="2" |
! rowspan="2" |Die size (mm<sup>2</sup>)
! rowspan="2" |[[Computer bus|Bus]] [[I/O interface|interface]]
! rowspan="2" |Core
! rowspan="2" |Memory clock ([[Hertz|MHz]])
! rowspan="2" |Core config{{efn|name=geforce 256 1|[[Pixel pipeline]]s: [[texture mapping unit]]s: [[render output unit]]s}}
! colspan="4" |Memory
! colspan="4" |[[Fillrate]]
! rowspan="2" |Performance ([[FLOPS|MFLOPS]] [[FP32]])
! rowspan="2" |TDP (Watts)
|-
! Size ([[Mebibyte|MiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width ([[bit]])
! MOps/s
! MPixels/s
! MTexels/s
! MVertices/s
|-
! style="text-align:left" |GeForce 256 SDR<ref>{{Cite web|title=4x AGP GeForce 256 Graphics Accelerator|url=http://vgamuseum.info/images/doc/nvidia/gf256/geforce256_graphics.pdf|access-date=September 26, 2022|website=vgamuseum.info|archive-date=7 January 2024|archive-url=https://web.archive.org/web/20240107050758/http://vgamuseum.info/images/doc/nvidia/gf256/geforce256_graphics.pdf|url-status=live}}</ref>
| {{Dts|1999|October|11|format=mdy|abbr=on}}
| rowspan="2" |NV10
| rowspan="2" |17
| rowspan="2" |139
| rowspan="2" |{{nowrap|AGP 4x}}, PCI
| rowspan="2" |120
| 166
| rowspan="2" |4:4:4
| rowspan="2" |32<br />64
| 2.656
| SDR
| rowspan="2" |128
| rowspan="2" |480
| rowspan="2" |480
| rowspan="2" |480
| rowspan="2" |0
| rowspan="2" |960
| 13
|-
! style="text-align:left" |GeForce 256 DDR<ref>{{cite web|title=NVIDIA GeForce 256 DDR Specs|url=https://www.techpowerup.com/gpu-specs/geforce-256-ddr.c734|access-date=February 16, 2021|website=TechPowerUp|language=en}}</ref>
| {{Dts|1999|December|13|format=mdy|abbr=on}}
|
|
| DDR
|
|}
{{notelist}}
===GeForce2 series===
{{Further|GeForce 2 series|Celsius (microarchitecture)}}
* All models support [[Direct3D]] 7 and [[OpenGL]] 1.2
* All models support TwinView Dual-Display Architecture, Second Generation Transform and Lighting (T&L),<br />Nvidia Shading Rasterizer (NSR), High-Definition Video Processor (HDVP)
* GeForce2 MX models support Digital Vibrance Control (DVC)
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
! rowspan="2" style="vertical-align: bottom"|Model{{spaces}}name
! rowspan="2" style="vertical-align: bottom"|Launch
! rowspan="2" |[[Code name]]
! rowspan="2" style="vertical-align: bottom"|[[Semiconductor device fabrication|Fab]]{{spaces}}([[nanometer|nm]])<ref name="vintage3d"/>
! rowspan="2" |Transistors (million)
! rowspan="2" |Die size (mm<sup>2</sup>)
! rowspan="2" |[[Computer bus|Bus]] [[I/O interface|interface]]
! rowspan="2" |Core clock ([[Hertz|MHz]])
! rowspan="2" |Memory clock ([[Hertz|MHz]])
! rowspan="2" |Core config{{efn|name=geforce 2 1|[[Pixel pipeline]]s: [[texture mapping unit]]s: [[render output unit]]s}}
! colspan="4" |Memory
! colspan="4" |[[Fillrate]]
! rowspan="2" |Performance ([[FLOPS|GFLOPS]] [[FP32]])
! rowspan="2" |TDP (Watts)
|-
! Size ([[Mebibyte|MiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width ([[bit]])
! MOps/s
! MPixels/s
! MTexels/s
! MVertices/s
|-
! style="text-align:left
|
| rowspan="4" |NV1A (IGP)/<br />NV11 (MX)
| rowspan="6" |[[TSMC]]<br />[[180 nm]]
| rowspan="4" |20<ref>{{cite web|title=NVIDIA GeForce2 MX PCI Specs|url=https://www.techpowerup.com/gpu-specs/geforce2-mx-pci.c791|access-date=2021-02-16|website=TechPowerUp|language=en}}</ref>
| rowspan="4" |64
|
| rowspan="3" |175
| 133
| rowspan="4" |2:4:2
| Up to 32 system RAM
| 2.128<br />4.256
|
| 64<br />128
| rowspan="3" |350
| rowspan="3" |350
| rowspan="3" |700
| rowspan="8" |0
| 0.700
| 3
|-
! style="text-align:left
|
| rowspan="3" |{{nowrap|AGP 4x}}, PCI
| rowspan="2" |166
| rowspan="6" |32<br />64
|
| rowspan="2" |SDR
| 64
|
| 1
|-
! style="text-align:left
|
|
| 128
|
|
|-
! style="text-align:left
|
| rowspan="3" |200
| 166/200 (SDR)<br />166 (DDR)
|
| SDR<br />DDR
| 64/128 (SDR)<br />64 (DDR)
|
| 400
| 800
|
|
|-
! style="text-align:left
|
| rowspan="4" |NV15
| rowspan="4" |25<ref>{{Cite web|url=https://www.techpowerup.com/gpu-specs/nvidia-nv15.g339|title=NVIDIA NV15 GPU Specs {{pipe}} TechPowerUp GPU Database|accessdate=15 April 2024}}</ref>
| rowspan="4" |88
| rowspan="4" |AGP 4x
|
| rowspan="4" |4:8:4
|
| rowspan="4" |DDR
| rowspan="4" |128
| rowspan="2" |800
| rowspan="2" |800
| rowspan="2" |1,600
| 1.
| 6
|-
! style="text-align:left
|
| rowspan="2" |200
| rowspan="2" |6.4
|
| ?
|-
! style="text-align:left
|
| TSMC<br />150 nm
| rowspan="2" |250
| rowspan="2" |1,000
| rowspan="2" |1,000
| rowspan="2" |2,000
|
| ?
|-
! style="text-align:left
| August 14, 2000
| TSMC<br />180 nm
|
|
|
|
| ?
|}
{{notelist}}
===GeForce3 series===
{{Further|GeForce 3 series|Kelvin (microarchitecture)}}
* All models are made via [[TSMC]] [[180 nm process|150 nm]] fabrication process
* All models support [[Direct3D]] 8.0 and [[OpenGL]] 1.3
* All models support 3D Textures, Lightspeed Memory Architecture (LMA), nFiniteFX Engine, Shadow Buffers
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
! rowspan="2" style="vertical-align: bottom"|Model{{spaces}}name
! rowspan="2" style="vertical-align: bottom"|Launch
! rowspan="2" |[[Code name]]
! rowspan="2" |Transistors (million)
! rowspan="2" |Die size (mm<sup>2</sup>)
! rowspan="2" |[[Computer bus|Bus]] [[I/O interface|interface]]
! rowspan="2" |Core clock ([[Hertz|MHz]])
! rowspan="2" |Memory clock ([[Hertz|MHz]])
! rowspan="2" |Core config{{efn|name=geforce 3 1|[[Pixel shader]]s: [[vertex shader]]s: [[texture mapping unit]]s: [[render output unit]]s}}
! colspan="4" |Memory
! colspan="4" |[[Fillrate]]
! rowspan="2" |Performance ([[FLOPS|GFLOPS]] [[FP32]])
! rowspan="2" |TDP (Watts)
|-
! Size ([[Mebibyte|MiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width ([[bit]])
! MOps/s
! MPixels/s
! MTexels/s
! MVertices/s
|-
! style="text-align:left
|
| rowspan="3" |NV20
| rowspan="3" |57
| rowspan="3" |128
| rowspan="3" |{{nowrap|AGP 4x}}, PCI
| 175
| 200
| rowspan="3" |4:1:8:4
| 64<br />128
| 6.4
| rowspan="3" |DDR
| rowspan="3" |128
| 700
| 700
| 1400
| 43.75
| 8.750
| ?
|-
! style="text-align:left" |GeForce3
| February 27, 2001
| 200
| 230
| 64
| 7.36
| 800
| 800
| 1600
|
|
|
|-
! style="text-align:left
|
|
|
|
| 8.0
|
| 960
| 1920
|
|
|
|}
{{notelist}}
===GeForce4 series===
{{Further|GeForce 4 series|Kelvin (microarchitecture)}}
* All models are made via [[TSMC]] [[180 nm process|150 nm]] fabrication process
* All models support Accuview Antialiasing (AA), Lightspeed Memory Architecture II (LMA II), nView
{{sort-under}}
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
! rowspan="2" style="vertical-align: bottom"|Model{{spaces}}name
! rowspan="2" style="vertical-align: bottom"|Launch
! rowspan="2" |[[Code name]]
! rowspan="2" |Transistors (million)
! rowspan="2" |Die size (mm<sup>2</sup>)
! rowspan="2" |[[Computer bus|Bus]] [[I/O interface|interface]]
! rowspan="2" |Core clock ([[Hertz|MHz]])
! rowspan="2" |Memory clock ([[Hertz|MHz]])
! rowspan="2" |Core config{{efn|name=geforce 4 1|[[Pixel shader]]s: [[vertex shader]]s: [[texture mapping unit]]s: [[render output unit]]s}}
! colspan="4" |Memory
! colspan="4" |[[Fillrate]]
! rowspan="2" |Performance ([[FLOPS|GFLOPS]] [[FP32]])
! colspan="2" |Supported{{spaces}}[[Application programming interface|API]]{{spaces}}version
! rowspan="2" |TDP (Watts)
|-
! Size ([[Mebibyte|MiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width ([[bit]])
! MOps/s
! MPixels/s
! MTexels/s
! MVertices/s
! [[Direct3D]]
! [[OpenGL]]
|-
! style="text-align:left
|
|
|
|
|
| rowspan="5" |250
| 133<br />200
| rowspan="8" |2:0:4:2
| Up to 128 system RAM
| 2.128<br />6.4
| DDR
| 64<br />128
| rowspan="5" |500
| rowspan="5" |500
| rowspan="2" |1,000
| rowspan="5" |125
| 1.000
| rowspan="8" |7.0
| rowspan="8"|1.2
| ?
|-
! style="text-align:left" |GeForce4 MX420
| February 6, 2002
| rowspan="2" |NV17
| rowspan="2" |29<ref>{{cite web|title=NVIDIA GeForce4 MX 420 PCI Specs|url=https://www.techpowerup.com/gpu-specs/geforce4-mx-420-pci.c778|access-date=2021-02-16|website=TechPowerUp|language=en}}</ref>
| rowspan="2" |65
| rowspan="2" |AGP 4x<br />PCI
| 166
| 64
|
| SDR<br />DDR
| 128 (SDR)<br />64 (DDR)
|
|
|-
! style="text-align:left
|
| 133<br />166
| rowspan="2" |64<br />128
| 2.128<br />5.312<ref name=":13">[https://gpuz.techpowerup.com/23/07/11/w8r.png Archived copy] {{Webarchive|url=https://web.archive.org/web/20230711072328/https://gpuz.techpowerup.com/23/07/11/w8r.png|date=2023-07-11}}</ref>
| rowspan="12" |DDR
| 64<br />128
| 500<ref name=":13" /><br />1000
|
| 13
|-
! style="text-align:left" |GeForce MX4000
| December 14, 2003
| rowspan="2" |NV18B
| rowspan="2" |29
| rowspan="2" |65
| AGP 8x<br />PCI
| rowspan="2" |166
| rowspan="2" |2.656
| rowspan="2" |64
| rowspan="2" |1000
|
| 14
|-
! style="text-align:left" |GeForce PCX4300
| February 19, 2004
| PCIe x16
| 128
|
|
|-
! style="text-align:left
|
| NV17
|
| 65
| AGP 4x<br />PCI
| rowspan="2" |275
|
| rowspan="5" |64<br />128
| 6.4
| 128
| rowspan="2" |550
| rowspan="2" |550
| rowspan="2" |1,100
| rowspan="2" |137.5
| 1.100
| 18
|-
! style="text-align:left
|
| NV18
| 29<ref>{{cite web|title=NVIDIA GeForce4 MX 440-8x Specs|url=https://www.techpowerup.com/gpu-specs/geforce4-mx-440-8x.c2132|access-date=2021-02-16|website=TechPowerUp|language=en}}</ref>
|
| AGP 8x<br />PCI
| 166<br />250
| 2.656<ref>{{Cite web |title=mx4408x-64bit166-Mhz |url=https://ibb.co/7XLyNBm |access-date=2022-12-25 |website=ImgBB |language=en |archive-date=25 December 2022 |archive-url=https://web.archive.org/web/20221225235549/https://ibb.co/7XLyNBm |url-status=usurped}}</ref><br />8.0
| 64<br />128
|
|
|-
! style="text-align:left
|
| NV17
|
|
| AGP 4x<br />PCI
| 300
|
| 8.8
| rowspan="7" |128
|
|
| 1
| 150
| 1.200
|
|-
! style="text-align:left
|
| NV25
| 63<ref>{{cite web |title=NVIDIA GeForce4 Ti 4200 Specs |url=https://www.techpowerup.com/gpu-specs/geforce4-ti-4200.c2133 |access-date=2021-02-16 |website=TechPowerUp |language=en}}</ref>
| 142
| AGP 4x
| rowspan="2" |250
| 222 (128 MiB)<br />250 (64 MiB)
| rowspan="6" |4:2:8:4
| 7.104 (128 MiB)<br />8.0 (64 MiB)
| rowspan="2" |1,000
| rowspan="2" |1,000
| rowspan="2" |2,000
| rowspan="2" |125
|
| rowspan="6" |8.1
| rowspan="6" |1.3
| 33
|-
! style="text-align:left
|
| NV28
| 63<ref>{{cite web |title=NVIDIA GeForce4 Ti 4200-8X Specs |url=https://www.techpowerup.com/gpu-specs/geforce4-ti-4200-8x.c182 |access-date=2022-11-20 |website=TechPowerUp |language=en}}</ref>
| 142
| AGP 8x
| 250
|
|
|
|-
! style="text-align:left
|
| NV25
|
| 142
| AGP 4x
| rowspan="2" |275
| rowspan="2" |275
| rowspan="4" |128
| rowspan="2
| rowspan="2" |1,100
| rowspan="2" |1,100
| rowspan="2" |2,200
| rowspan="2" |137.5
|
|
|-
! style="text-align:left" |GeForce4 Ti4400 8x <br />(Ti4800SE{{efn|name=geforce 4 2|GeForce4 Ti4400 8x: Card manufacturers utilizing this chip labeled the card as a Ti4800SE. The surface of the chip has "Ti-8x" printed on it.}})
|
| NV28
|
| 101
| AGP 8x
|
|
|-
! style="text-align:left
|
| NV25
|
| 142
| AGP 4x
| rowspan="2" |300
| rowspan="2" |325
| rowspan="2" |10.4
| rowspan="2" |1,200
| rowspan="2" |1,200
| rowspan="2" |2,400
| rowspan="2" |150
|
|
|-
! style="text-align:left" |GeForce4 Ti4600 8x <br />(Ti4800{{efn|name=geforce 4 3|GeForce4 Ti4600 8x: Card manufacturers utilizing this chip labeled the card as a Ti4600, and in some cases as a Ti4800. The surface of the chip has "Ti-8x" printed on it, as well as "4800" printed at the bottom.}})
|
| NV28
|
| 101
| AGP 8x
|
|
|}
{{notelist}}
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
! rowspan="2" | Model
! colspan="2" | Features
|-
! nFiniteFX II Engine
! Video Processing Engine (VPE)
|-
! style="text-align:left;" | GeForce4 MX420
| {{no}}
| {{yes}}
|-
! style="text-align:left;" | GeForce4 MX440 SE
| {{no}}
| {{yes}}
|-
! style="text-align:left;" | GeForce4 MX4000
| {{no}}
| {{yes}}
|-
! style="text-align:left;" | GeForce4 PCX4300
| {{no}}
| {{yes}}
|-
! style="text-align:left;" | GeForce4 MX440
| {{no}}
| {{yes}}
|-
! style="text-align:left;" | GeForce4 MX440 8X
| {{no}}
| {{yes}}
|-
! style="text-align:left;" | GeForce4 MX460
| {{no}}
| {{yes}}
|-
! style="text-align:left;" | GeForce4 Ti4200
| {{yes}}
| {{no}}
|-
! style="text-align:left;" | GeForce4 Ti4200 8x
| {{yes}}
| {{no}}
|-
! style="text-align:left;" | GeForce4 Ti4400
| {{yes}}
| {{no}}
|-
! style="text-align:left;" | GeForce4 Ti4400 8x
| {{yes}}
| {{no}}
|-
! style="text-align:left;" | GeForce4 Ti4600
| {{yes}}
| {{no}}
|-
! style="text-align:left;" | GeForce4 Ti4600 8x
| {{yes}}
| {{no}}
|}
===GeForce FX (5xxx) series===
{{Further|GeForce FX series|Rankine (microarchitecture)}}
* All models support [[Direct3D]] 9.0a and [[OpenGL]] 1.5 (2.1 in software with the latest drivers)
* The GeForce FX series runs vertex shaders in an array
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
! rowspan="2" |Model
! rowspan="2" |Launch
! rowspan="2" |[[Code name]]
! rowspan="2" |[[Semiconductor device fabrication|Fab]] ([[nanometer|nm]])<ref name="vintage3d"/>
! rowspan="2" |Transistors (million)
! rowspan="2" |Die size (mm<sup>2</sup>)
! rowspan="2" |[[Computer bus|Bus]] [[I/O interface|interface]]
! rowspan="2" |Core clock ([[Hertz|MHz]])
! rowspan="2" |Memory clock ([[Hertz|MHz]])
! rowspan="2" |Core config{{efn|name=geforce fx 1|[[Pixel shader]]s: [[vertex shader]]s: [[texture mapping unit]]s: [[render output unit]]s}}
! colspan="4" |Memory
! colspan="4" |[[Fillrate]]
! rowspan="2" |Performance ([[FLOPS|GFLOPS]]<br/>[[FP32]])
! rowspan="2" |TDP (Watts)
|-
! Size ([[Mebibyte|MiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width ([[bit]])
! MOperations/s
! MPixels/s
! MTexels/s
! MVertices/s
|-
! style="text-align:left
|
| rowspan="5" |NV34
| rowspan="6" |[[TSMC]] 150 nm
| rowspan="5" |45<ref>{{cite web|title=NVIDIA NV34 GPU Specs|url=https://www.techpowerup.com/gpu-specs/nvidia-nv34.g21|access-date=2021-02-16|website=TechPowerUp|language=en}}</ref>
| rowspan="5" |124
| rowspan="2" |AGP 8x
|
| rowspan="2" |166
| rowspan="10" |4:2:4:4
| 64<br /> 128
|
| rowspan="14" |DDR
|
| 800
| 800
| 800
| 100.0
| 12.0
| ?
|-
! style="text-align:left
| rowspan="2" |250
| rowspan="3" |64<br />128<br />256
| 2.6<br />5.3
| 64<br />128
| rowspan="2" |1,000
| rowspan="2" |1,000
| rowspan="2" |1,000
| rowspan="2" |125.0
| 15.0
| ?
|-
! style="text-align:left" |GeForce FX 5200
| AGP 8x<br />PCI
| 200
| 3.2<br />6.4
| 64<br />128
|
| 21
|-
! style="text-align:left" |GeForce FX 5200 Ultra
| March 6, 2003
| AGP 8x
| 325
|
| 10.4
| 128
|
| 1
| 1,300
| 162.5
| 19.5
| 32
|-
! style="text-align:left
|
| PCIe x16
|
| 166
| 128
| 2.6
| 64
| 1,000
| 1,000
| 1,000
| 125.0
| 15.0
| 21
|-
! style="text-align:left" |GeForce FX 5500
| March 2004
| NV34B
| 45<ref>{{cite web|title=NVIDIA GeForce FX 5500 PCI Specs|url=https://www.techpowerup.com/gpu-specs/geforce-fx-5500-pci.c62|access-date=2021-02-16|website=TechPowerUp|language=en}}</ref>
| 91
| AGP 8x<br />AGP 4x<br />PCI
| 270
| 166<br/>200
| 64<br />128<br />256
| 5.3<br/>6.4
| 128
|
|
|
|
| 16.2
| ?
|-
! style="text-align:left
|
| rowspan="4" |NV31
| rowspan="19" |TSMC [[130 nm]]
| rowspan="4" |80<ref>{{cite web|title=NVIDIA GeForce FX 5600 XT PCI Specs|url=https://www.techpowerup.com/gpu-specs/geforce-fx-5600-xt-pci.c63|access-date=2021-02-16|website=TechPowerUp|language=en}}</ref>
| rowspan="4" |121
| AGP 8x
| 235
|
| 64<br />128
| 3.2<br />6.4
| 64<br />128
| 940
| 940
|
|
|
|
|-
! style="text-align:left
|
| AGP 8x<br />PCI
| 325
|
| 64<br />128<br />256<ref>{{cite web |url=http://www.nvidia.com/page/fx_5600.html |title=GeForce FX 5600 |website=Nvidia.com |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20150925190010/http://www.nvidia.com/page/fx_5600.html |archive-date=2015-09-25 |url-status=live}}</ref>
| 8.8
| rowspan="3" |128
|
|
| 1
| 162.5
| 19.5
| 25
|-
! style="text-align:left
|
| rowspan="3" |AGP 8x
| 350
|
| rowspan="2" |64<br />128
| 11.2
|
|
|
|
| 21.0
| 27
|-
! style="text-align:left
| 400
| 400
| 12.8
|
|
|
|
| 24.0
| 31
|-
! style="text-align:left
| September 2004
| rowspan="6" |NV36
| rowspan="6" |82<ref>{{cite web|title=NVIDIA GeForce FX 5700 LE Specs|url=https://www.techpowerup.com/gpu-specs/geforce-fx-5700-le.c747|access-date=2021-02-16|website=TechPowerUp|language=en}}</ref>
| rowspan="6"|133
| rowspan="2"|250
| rowspan="2"|200
| rowspan="6"|4:3:4:4
| rowspan="3" |128<br />256
| rowspan="2" |3.2<br />6.4
| rowspan="2" |64<br />128
| rowspan="2" |1000
| rowspan="2" |1000
| rowspan="2" |1000
| rowspan="2" |187.5
| 17.5
| 20
|-
! style="text-align:left
| March 2004
| AGP 8x<br />PCI
|
| 21
|-
! style="text-align:left
|
| AGP 8x
| rowspan="2" |425
| rowspan="2" |250
| rowspan="2" |8.0
| rowspan="6"|128
| rowspan="2" |1,700
| rowspan="2" |1,700
| rowspan="2" |1,700
| rowspan="2" |318.7
| 29.7
| 20
|-
! style="text-align:left" |GeForce PCX 5750
| March 17, 2004
| PCIe x16
| 128
|
| 25
|-
! style="text-align:left
|
| rowspan="8" |AGP 8x
| rowspan="2" |475
|
| rowspan="2" |128<br/>256
| 14.4
|
| rowspan="2" |1,900
| rowspan="2" |1,900
| rowspan="2" |1,900
| rowspan="2" |356.2
| 33.2
| 43
|-
! style="text-align:left
|
| 475
| 15.2
|
|
|
|-
! style="text-align:left
| rowspan="2" |January 27, 2003
| rowspan="2" |NV30
| rowspan="2" |125<ref>{{cite web|title=NVIDIA GeForce FX 5800 Specs|url=https://www.techpowerup.com/gpu-specs/geforce-fx-5800.c703|access-date=2021-02-16|website=TechPowerUp|language=en}}</ref>
| rowspan="2" |199
|
| 400
| rowspan="2" |4:2:8:4
| rowspan="5" |128
| 12.8
| rowspan="2" |GDDR2
|
|
| 3,200
| 300.0
| 24.0
| 55
|-
! style="text-align:left
| 500
| 500
| 16.0
|
|
|
|
| 30.0
| 66
|-
! style="text-align:left
|
| rowspan="5" |NV35
| rowspan="5" |135<ref>{{cite web|title=NVIDIA GeForce FX 5900 Specs|url=https://www.techpowerup.com/gpu-specs/geforce-fx-5900.c77|access-date=2021-02-16|website=TechPowerUp|language=en}}</ref>
| rowspan="5" |207
|
| rowspan="2" |350
| rowspan="7" |4:3:8:4
| rowspan="2" |22.4
| rowspan="6" |DDR
| rowspan="7" |256
|
|
|
|
| 22.7
| ?
|-
! style="text-align:left" |GeForce FX 5900 XT
| December 15, 2003<ref>{{cite web |url=http://www.prnewswire.com/news-releases/nvidia-and-activision-launch-call-of-duty-bundle-with-geforce-fx-5900-graphics-cards-73319297.html |title=NVIDIA and Activision Launch "Call of Duty" Bundle with GeForce FX 5900 Graphics Cards |access-date=2017-03-31 |archive-url=https://web.archive.org/web/20170401055327/http://www.prnewswire.com/news-releases/nvidia-and-activision-launch-call-of-duty-bundle-with-geforce-fx-5900-graphics-cards-73319297.html |archive-date=2017-04-01 |url-status=live}}</ref>
| 390
| rowspan="2" |1,600
| rowspan="2" |1,600
| rowspan="2" |3,200
| rowspan="2" |300.0
| 27.3
| 48
|-
! style="text-align:left
| May
| 400
| rowspan="2" |425
| rowspan="2"|27.2
|
|
|-
! style="text-align:left
| May
| 450
| rowspan="2"|128<br/>256
|
|
|
|
|
|
|-
! style="text-align:left" |GeForce PCX 5900
| March 17, 2004
| PCIe x16
| 350
| 275
| 17.6
| 1,400
| 1,400
| 2,800
| 262.5
| 24.5
| 49
|-
! style="text-align:left
|
| rowspan="2" |NV38
| rowspan="2" |135<ref>{{cite web|title=NVIDIA GeForce FX 5950 Ultra Specs|url=https://www.techpowerup.com/gpu-specs/geforce-fx-5950-ultra.c79|access-date=2021-02-16|website=TechPowerUp|language=en}}</ref>
| rowspan="2" |207
| AGP 8x
| rowspan="2" |475
| 475
| rowspan="2" |256
| 30.4
| rowspan="2" |1,900
| rowspan="2" |1,900
| rowspan="2" |3,800
| rowspan="2" |356.2
| 33.2
| 83
|-
!
| February 17, 2004
| PCIe x16
| 425
| 27.2
| GDDR3
|
| 83
|-
! rowspan="2" |
! rowspan="2" |Launch
! rowspan="2" |[[Code name]]
! rowspan="2" |Fab ([[nanometer|nm]])<ref name="vintage3d"/>
! rowspan="2" |Transistors (million)
! rowspan="2" |Die size (mm{{sup|2}})
! rowspan="2" |[[Computer bus|Bus]] [[I/O interface|interface]]
! rowspan="2" |Core clock ([[Hertz|MHz]])
! rowspan="2" |Memory clock ([[Hertz|MHz]])
! rowspan="2" |Core config{{efn|name=geforce fx 1|[[Pixel shader]]s: [[vertex shader]]s: [[texture mapping unit]]s: [[render output unit]]s}}
! colspan="4" |Memory
! colspan="4" |[[Fillrate]]
! rowspan="2" |Performance ([[FLOPS|GFLOPS]]<br/>[[FP32]])
! rowspan="2" |TDP (Watts)
|-
! Size ([[Mebibyte|MiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width ([[bit]])
! MOperations/s
! MPixels/s
! MTexels/s
! MVertices/s
|}
{{notelist}}
===GeForce 6 (6xxx) series===
{{Further|GeForce 6 series|Curie (microarchitecture)}}
* All models support [[Direct3D]] 9.0c and [[OpenGL]] 2.1
* All models support Transparency [[Spatial anti-aliasing|AA]] (starting with version 91.47 of the ForceWare drivers) and PureVideo
* <math display="inline">\text{GFLOPS} = \bigl(\text{PixelShaders}\times12 + \text{VertexShaders}\times8\bigr)\times\frac{\text{clock (MHz)}}{1000}</math><ref name=":17" /><ref name=":16" />
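As a worked illustration of the formula above (a minimal sketch; the configuration and clock used are illustrative, not a statement about any particular board):
<syntaxhighlight lang="python">
def geforce6_gflops(pixel_shaders: int, vertex_shaders: int, core_clock_mhz: float) -> float:
    """GFLOPS per the formula above: (pixel shaders x 12 + vertex shaders x 8) x clock (MHz) / 1000."""
    return (pixel_shaders * 12 + vertex_shaders * 8) * core_clock_mhz / 1000

# Illustrative 16:6:16:16 configuration at 350 MHz: (16*12 + 6*8) * 0.35 = 84.0 GFLOPS
print(geforce6_gflops(16, 6, 350))
</syntaxhighlight>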
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
! rowspan="2" |Model
! rowspan="2" |Launch
! rowspan="2" |[[Code name]]
! rowspan="2" |[[Semiconductor device fabrication|Fab]] ([[nanometer|nm]])<ref name="vintage3d"/>
! rowspan="2" |Transistors (million)<br />Die size (mm<sup>2</sup>)
! rowspan="2" |[[Computer bus|Bus]] [[I/O interface|interface]]
! rowspan="2" |Core clock ([[Hertz|MHz]])
! rowspan="2" |Memory clock ([[Hertz|MHz]])
! rowspan="2" |Core config{{efn|name=geforce 6 1|[[Pixel shader]]s: [[vertex shader]]s: [[texture mapping unit]]s: [[render output unit]]s}}
! colspan="4" |Memory
! colspan="4" |[[Fillrate]]
! rowspan="2" |Performance<ref name=":16">{{Cite web |date= |title=The GeForce 6800 |url=https://www.cs.cmu.edu/afs/cs/academic/class/15869-f11/www/readings/montrym05_geforce6800.pdf |url-status=live |archive-url=https://web.archive.org/web/20150922000610/http://www.cs.cmu.edu/afs/cs/academic/class/15869-f11/www/readings/montrym05_geforce6800.pdf |archive-date=2015-09-22 |access-date=}}</ref> (GFLOPS)<ref name=":17">{{Cite web |title=Chapter 30. The GeForce 6 Series GPU Architecture |url=https://developer.nvidia.com/gpugems/gpugems2/part-iv-general-purpose-computation-gpus-primer/chapter-30-geforce-6-series-gpu |url-status=live |archive-url=https://web.archive.org/web/20200115111706/https://developer.nvidia.com/gpugems/gpugems2/part-iv-general-purpose-computation-gpus-primer/chapter-30-geforce-6-series-gpu |archive-date=2020-01-15}}</ref>
! rowspan="2" |TDP (Watts)
|-
! Size ([[Mebibyte|MiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width ([[bit]])
! MOperations/s
! MPixels/s
! MTexels/s
! MVertices/s
|-
! style="text-align:left" |GeForce 6100 + nForce 410
| October 20, 2005
| MCP51
| rowspan="4" |[[TSMC]] [[90 nm]]
|
| rowspan="4" |[[HyperTransport]]
| rowspan="3" |425
| 100–200 (DDR)<br />200–533 (DDR2)
| rowspan="5" |2:1:2:1
| rowspan="4" |Up to 256 system RAM
| 1.6–6.4 (DDR)<br />3.2–17.056 (DDR2)
| DDR<br />DDR2
| rowspan="4" |64<br />128
| rowspan="3" |850
| rowspan="3" |425
| rowspan="3" |850
| rowspan="3" |106.25
| 13.6
| ?
|-
! style="text-align:left" |GeForce 6150 SE + nForce 430
| rowspan="2" |June 2006
| MCP61
| rowspan="2" |
| 200<br />400{{citation needed|date=September 2012}}<!-- originally was 400-800 -->
| 3.2<br />16.0{{citation needed|date=September 2012}}<!-- originally was 3.2-16.0 -->
| DDR2
| 13.6
| ?
|-
! style="text-align:left" |GeForce 6150 LE + nForce 430
| MCP61
| rowspan="2" |100–200 (DDR)<br />200–533 (DDR2)
| 1.6–6.4 (DDR)<br />3.2–17.056 (DDR2)
| rowspan="2" |DDR<br />DDR2
| 13.6
| ?
|-
! style="text-align:left" |GeForce 6150 + nForce 430
| October 20, 2005
| MCP51
|
| 475
| 1.6–6.4 (DDR)<br />3.2–17.056 (DDR2)
| 950
| 475
| 950
|
|
|
|-
! style="text-align:left
| April 4, 2005
| NV44
| rowspan="8" |TSMC 110 nm
| 75<br />110<ref name=":5">{{cite web|title=NVIDIA NV44 GPU Specs|url=https://www.techpowerup.com/gpu-specs/nvidia-nv44.g161|access-date=2021-02-16|website=TechPowerUp|language=en}}</ref>
| AGP 8x<br />PCIe x16
| 350
| 266
| 128<br />256
|
| DDR
| 64
|
|
| 700
| 87.5
| 11.2
| ?
|-
! style="text-align:left
|
| NV44A
| 75<br />110<ref>{{cite web|title=NVIDIA NV44B GPU Specs|url=https://www.techpowerup.com/gpu-specs/nvidia-nv44b.g427|access-date=2021-02-16|website=TechPowerUp|language=en}}</ref>
| AGP
| 300<br />350<ref name="gf6200xfx" />
| 250 (DDR)<br />250-333 (DDR2)<ref name="gf6200xfx">{{Cite web|url=http://www.xfxforce.com/en-us/products/graphiccards/6series/6200.aspx#2|archiveurl=https://web.archive.org/web/20080916224908/http://www.xfxforce.com/en-us/products/graphiccards/6series/6200.aspx#2|url-status=dead|title=GeForce 6 Series-6200 - XFXforce.com|archivedate=September 16, 2008}}</ref>
|
| 128<br />256<ref name="gf6200xfx" /><br />512<ref name="gf6200xfx" />
| 4<br />4-5.34 (DDR2)<ref name="gf6200specxfx">{{Cite web|url=http://xfxforce.com/web/product/listFeatures.jspa?series=GeForce%26%238482;+6200&seriesId=43|archiveurl=https://web.archive.org/web/20071012114632/http://xfxforce.com/web/product/listSpecifications.jspa?series=GeForce%26%238482;+6200&seriesId=43|url-status=dead|title=Products GeForce 6200:Features - XFXforce.com|archivedate=October 12, 2007}}</ref>
| DDR<br />DDR2<ref name="gf6200ddr2xfx">{{Cite web |title=GeForce 6200:Model PV-T44A-WANG - XFXforce.com |url=http://www.xfxforce.com/web/product/listConfigurationDetails.jspa?series=GeForce%26%238482%3B+6200&productConfigurationId=79772 |url-status=dead |archiveurl=https://web.archive.org/web/20080412013852/http://www.xfxforce.com/web/product/listConfigurationDetails.jspa?series=GeForce%26%238482%3B+6200&productConfigurationId=79772 |archivedate=April 12, 2008}}</ref>
| 64<ref name="gf6200ddr2xfx" />
| 1,400<ref name="gf6200xfx" />
| 700<ref name="gf6200xfx" />
| 1400<ref name="gf6200xfx" />
| 175<br />225<ref name="gf6200ddr2xfx" />
| 21.6<br />25.2
|
|-
! style="text-align:left
| October 12, 2004 (PCIe)<br />January 17, 2005 (AGP)
| NV43
| 146<br />154<ref name=":6">{{cite web|title=NVIDIA NV43 GPU Specs|url=https://www.techpowerup.com/gpu-specs/nvidia-nv43.g7|access-date=2021-02-16|website=TechPowerUp|language=en}}</ref>
| AGP 8x<br />PCI<br />PCIe x16
| 300
|
|
| 128<br />256
| 8.8
| DDR
| 128
|
|
| 1,200
| 225
| 21.6
| 20
|-
! style="text-align:left
|
| rowspan="2" |NV44
| rowspan="2" |75<br />110<ref name=":5" />
| rowspan="2" |PCIe x16
| 350
| 200<br />275<br />350
| rowspan="2" |4:3:4:2
| 128–256 system RAM incl. 16/32–64/128 onboard
| 3.2<br />4.4<br />5.6
| rowspan="3" |DDR
| rowspan="2" |64
|
|
|
| 262.5
| 25.2
| 25
|-
! style="text-align:left
|
|
|
| rowspan="4" |128<br />256
| 5.328
|
| 800
|
|
|
|
|-
! style="text-align:left
| 2005
| rowspan="3" |NV43
| rowspan="3" |146<br />154<ref name=":6" />
| rowspan="6" |AGP 8x<br />PCIe x16
| rowspan="2" |300
|
|
|
| rowspan="3" |128
|
| rowspan="2" |1,200
|
| rowspan="2" |225
|
| ?
|-
! style="text-align:left
| August 12, 2004
| 275<br />400
| rowspan="2" |8:3:8:4
| 8.8<br />12.8
| DDR<br />DDR2
| 2,400
| 2,400
| 36.0
| 26
|-
! style="text-align:left" |GeForce 6600 GT
| August 12, 2004 (PCIe)<br />November 14, 2004 (AGP)
| 500
| 475 (AGP)<br />500 (PCIe)
| 15.2 (AGP)<ref>{{cite web |url=http://www.techpowerup.com/gpudb/115/geforce-6600-gt-agp.html |title=Nvidia GeForce 6600 GT AGP |website=Techpowerup.com |access-date=2015-04-16 |archive-url=https://archive.today/20150416180159/http://www.techpowerup.com/gpudb/115/geforce-6600-gt-agp.html |archive-date=2015-04-16 |url-status=live}}</ref><br />16 (PCIe)
| GDDR3
|
|
|
| 375
| 60.0
| 47
|-
! style="text-align:left
| July 22, 2004 (AGP)<br />January 16, 2005 (PCIe)
| rowspan="3" |NV40 (AGP)<br />NV41, NV42 (PCIe)
| rowspan="4" |[[IBM]] 130 nm
| rowspan="3" |222<br />287 (NV40)<ref name=":7">{{cite web|title=NVIDIA NV40 GPU Specs|url=https://www.techpowerup.com/gpu-specs/nvidia-nv40.g6|access-date=2021-02-16|website=TechPowerUp|language=en}}</ref><br />222<br />225 (NV41)<ref>{{cite web|title=NVIDIA NV41 GPU Specs|url=https://www.techpowerup.com/gpu-specs/nvidia-nv41.g162|access-date=2021-02-16|website=TechPowerUp|language=en}}</ref><br />198<br />222 (NV42)<ref name=":8">{{cite web|title=NVIDIA NV42 GPU Specs|url=https://www.techpowerup.com/gpu-specs/nvidia-nv42.g44|access-date=2021-02-16|website=TechPowerUp|language=en}}</ref>
| 320 (AGP)<br />325 (PCIe)
|
| rowspan="2" |8:4:8:8
|
| 22.4
| DDR
| 256
| 2,560 (AGP)<br />2,600 (PCIe)
| 2,560 (AGP)<br />2,600 (PCIe)
| 2,560 (AGP)<br />2,600 (PCIe)
| 320 (AGP)<br />325 (PCIe)
| 41.0<br />41.6
| ?
|-
! style="text-align:left
|
| 300 (64 Bit)<br />325
| 266 (64 Bit)<br />350<br />500 (GDDR3)
| 256
| 4.256<br />11.2<br />22.4<br />32 (GDDR3)
| DDR<br />DDR2<br />GDDR3
| 64<ref>{{cite web|url=https://www.biostar.com.tw/app/cn/vga/content.php?S_ID=40|title=映泰集团 :: V6802XA16 :: 产品规格|website=www.biostar.com.tw|access-date=2019-12-12|archive-date=12 October 2022|archive-url=https://web.archive.org/web/20221012210343/https://www.biostar.com.tw/app/cn/vga/content.php?S_ID=40|url-status=live}}</ref><br />128<ref>{{cite web|url=https://www.biostar.com.tw/app/en/vga/introduction.php?S_ID=37|title=VGA/GPU Manufacturer - BIOSTAR Group.|website=www.biostar.com.tw|language=en|access-date=2019-12-12|archive-date=12 October 2022|archive-url=https://web.archive.org/web/20221012210234/https://www.biostar.com.tw/app/en/vga/introduction.php?S_ID=37|url-status=live}}</ref><br />256
| 2,400<br />2,600
| 2,400<br />2,600
| 2,400<br />2,600
|
| 38.4<br />41.6
|
|-
! style="text-align:left
| April 14, 2004 (AGP)<br />November 8, 2004 (PCIe)
| 325
|
| rowspan="3" |12:5:12:12
| 128<br />256
| 22.4
| DDR
| rowspan="6" |256
|
|
| 3,900
| 406.25
| 59.8
| 40
|-
! style="text-align:left
|
| NV45
| 222<br />287 (NV45)<ref name=":9">{{cite web|title=NVIDIA NV45 GPU Specs|url=https://www.techpowerup.com/gpu-specs/nvidia-nv45.g163|access-date=2021-02-16|website=TechPowerUp|language=en}}</ref>
| PCIe x16
|
|
|
| 28.8
| rowspan="5" |GDDR3
|
|
|
| 437.5
| ?
| ?
|-
! style="text-align:left
| December 8, 2005 (AGP)<br />November 7, 2005 (PCIe)
| NV40 (AGP)<br />NV42 (PCIe)
| TSMC 110 nm
| 222<br />287 (NV40)<ref name=":7" /><br />198<br />222 (NV42)<ref name=":8" />
| AGP 8x<br />PCIe x16
| 350 (AGP)<br />425 (PCIe)
| rowspan="2" |500
| rowspan="2" |128<br />256
| rowspan="2" |32
| 5,100
| 5,100
| 5,100
| 531.25
| 64.4<br />78.2
| 59
|-
! style="text-align:left" |GeForce 6800 GT
| May 4, 2004 (AGP)<br />June 28, 2004 (PCIe)
| rowspan="2" |NV40 (AGP)<br />NV45 (PCIe)
| rowspan="3" |IBM 130 nm
| rowspan="2" |222<br />287 (NV40)<ref name=":7" /><br />222<br />287 (NV45)<ref name=":9" />
| rowspan="2" |AGP 8x<br />PCIe x16
| 350
| rowspan="3" |16:6:16:16
| 5
|
|
|
|
|
|-
! style="text-align:left
| May 4, 2004 (AGP)<br />June 28, 2004 (PCIe)<br />March 14, 2005 (512 MiB)
|
| 525 (512 MiB)<br />550 (256 MiB)
|
| 33.6 (512 MiB)<br />35.2 (256 MiB)
|
|
| 6,400
|
|
|
|-
! style="text-align:left
| May
| NV40
| 222<br />287 (NV40)<ref name=":7" />
| AGP
| 450
| 600
| 256
|
|
| 7,200
|
|
|
|
|-
! rowspan="2" |Model
! rowspan="2" |Launch
! rowspan="2" |[[Code name]]
! rowspan="2" |Fab ([[nanometer|nm]])<ref name="vintage3d"/>
! rowspan="2" |Transistors (million)<br />Die size (mm<sup>2</sup>)
! rowspan="2" |[[Computer bus|Bus]] [[I/O interface|interface]]
! rowspan="2" |Core clock ([[Hertz|MHz]])
! rowspan="2" |Memory clock ([[Hertz|MHz]])
! rowspan="2" |Core config{{efn|name=geforce 6 1|[[Pixel shader]]s: [[vertex shader]]s: [[texture mapping unit]]s: [[render output unit]]s}}
! colspan="4" |Memory
! colspan="4" |[[Fillrate]]
! rowspan="2" |Performance<ref name=":16" /> (GFLOPS)<ref name=":17" />
! rowspan="2" |TDP (Watts)
|-
! Size ([[Mebibyte|MiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width ([[bit]])
! MOperations/s
! MPixels/s
! MTexels/s
! MVertices/s
|}
{{notelist}}
====Features====
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
! rowspan="2" | Model
! colspan="4" | Features
|-
! OpenEXR HDR
! [[Scalable Link Interface]] (SLI)
! [[TurboCache]]
! [[Nvidia PureVideo|PureVideo]] WMV9 Decoding
|-
! style="text-align:left;" | GeForce 6100
|
|
| {{no}}
| {{Partial|Limited}}
|-
! style="text-align:left;" | GeForce 6150 SE
|
|
| {{Partial|Driver-Side Only}}
| {{Partial|Limited}}
|-
! style="text-align:left;" | GeForce 6150
| {{no}}
| {{no}}
| {{no}}
| {{yes}}
|-
! style="text-align:left;" | GeForce 6150 LE
| {{no}}
| {{no}}
| {{Partial|Driver-Side Only}}
| {{yes}}
|-
! style="text-align:left;" | GeForce 6200
| {{no}}
| {{no}}
| {{yes2}} Yes (PCIe only)
| {{yes}}
|-
! style="text-align:left;" | GeForce 6500
| {{no}}
| {{yes}}
| {{yes}}
| {{yes}}
|-
! style="text-align:left;" | GeForce 6600 LE
| {{yes}}
| {{yes2}} Yes (No SLI Connector)
| {{no}}
| {{yes}}
|-
! style="text-align:left;" | GeForce 6600
| {{yes}}
| {{yes2}} Yes (SLI Connector or PCIe Interface)
| {{no}}
| {{yes}}
|-
! style="text-align:left;" | GeForce 6600 DDR2
| {{yes}}
| {{yes2}} Yes (SLI Connector or PCIe Interface)
| {{no}}
| {{yes}}
|-
! style="text-align:left;" | GeForce 6600 GT
| {{yes}}
| {{yes}}
| {{no}}
| {{yes}}
|-
! style="text-align:left;" | GeForce 6800 LE
| {{yes}}
| {{no}}
| {{no}}
| {{no}}
|-
! style="text-align:left;" | GeForce 6800 XT
| {{yes}}
| {{yes2}} Yes (PCIe only)
| {{no}}
| {{yes2}} Yes (NV42 only)
|-
! style="text-align:left;" | GeForce 6800
| {{yes}}
| {{yes2}} Yes (PCIe only)
| {{no}}
| {{yes2}} Yes (NV41, NV42 only)
|-
! style="text-align:left;" | GeForce 6800 GTO
| {{yes}}
| {{yes}}
| {{no}}
| {{no}}
|-
! style="text-align:left;" | GeForce 6800 GS
| {{yes}}
| {{yes2}} Yes (PCIe only)
| {{no}}
| {{yes2}} Yes (NV42 only)
|-
! style="text-align:left;" | GeForce 6800 GT
| {{yes}}
| {{yes2}} Yes (PCIe only)
| {{no}}
| {{no}}
|-
! style="text-align:left;" | GeForce 6800 Ultra
| {{yes}}
| {{yes2}} Yes (PCIe only)
|
| {{no}}
|}
===GeForce 7 (7xxx) series===
{{Further|GeForce 7 series|Curie (microarchitecture)}}
* All models support [[Direct3D]] 9.0c and [[OpenGL]] 2.1
* All models support Transparency [[Spatial anti-aliasing|AA]] (starting with version 91.47 of the ForceWare drivers)
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
! rowspan="2" |Model
! rowspan="2" |Launch
! rowspan="2" |[[Code name]]
! rowspan="2" |[[Semiconductor device fabrication|Fab]] ([[nanometer|nm]])<ref name="vintage3d"/>
! rowspan="2" |Transistors (million)
! rowspan="2" |Die size (mm<sup>2</sup>)
! rowspan="2" |[[Computer bus|Bus]] [[I/O interface|interface]]
! rowspan="2" |Core clock ([[Hertz|MHz]])
! rowspan="2" |Memory clock ([[Hertz|MHz]])
! rowspan="2" |Core config{{efn|name=geforce 7 1|[[Pixel shader]]s: [[vertex shader]]s: [[texture mapping unit]]s: [[render output unit]]s}}
! colspan="4" |Memory
! colspan="4" |[[Fillrate]]
! rowspan="2" |Performance (GFLOPS)<ref name=":17" /><ref name=":16" />
! rowspan="2" |TDP (Watts)
|-
! Size ([[Mebibyte|MiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width ([[bit]])
! MOperations/s
! MPixels/s
! MTexels/s
! MVertices/s
|-
! style="text-align:left
| rowspan="5" |July 2007
| MCP68S
| rowspan="2" |[[TSMC]] 110 nm
|
|
| rowspan="2" |[[HyperTransport]]
| rowspan="2" |425
| rowspan="2" |200 (DDR)<br />400 (DDR2)<br />933 (DDR3)
| rowspan="5" |2:1:2:2
| rowspan="5" |Up to 256 system RAM
| rowspan="2" |6.4<br />12.8<br />34
| rowspan="2" |DDR<br />DDR2<br />DDR3
| rowspan="2" |64<br />128
| rowspan="2" |850
| rowspan="2" |850
| rowspan="2" |850
| rowspan="2" |106.25
| 13.6
| ?
|-
! style="text-align:left" |GeForce 7050PV + nForce 630a
| MCP67QV
|
|
| 13.6
| ?
|-
! style="text-align:left" |GeForce 7050 + nForce 610i/630i
| MCP73
| rowspan="3" |TSMC 90 nm
|
|
| HyperTransport/[[Front-side bus|FSB]]
| 500
| 333
| 5.336
| rowspan="3" |DDR2
| rowspan="3" |64
| 1,000
| 1,000
| 1,000
| 125
| 16.0
| ?
|-
! style="text-align:left" |GeForce 7100 + nForce 630i
| rowspan="2" |MCP76
|
|
| rowspan="2" |FSB
| 600
| rowspan="2" |400
| rowspan="2" |6.4
| 1,200
| 1,200
| 1,200
| 150
| 19.2
| ?
|-
! style="text-align:left" |GeForce 7150 + nForce 630i
|
|
| 630
| 1,260
| 1,260
| 1,260
| 157.5
| 20.2
| ?
|-
! style="text-align:left" |GeForce 7100 GS
| August 8, 2006
| NV44
| TSMC 110 nm
| 75<ref name=":5" />
| 110
| rowspan="5" |PCIe x16
| 350
| 266<br />333
|
| rowspan="2" |128<br />256
|
|
| rowspan="4" |32<br />64
| 1,400
| 700
| 1,400
| 262.5
| 25.2
| ?
|-
! style="text-align:left" |GeForce 7200 GS
| January 18, 2006
| rowspan="4" |G72
| rowspan="8" |TSMC 90 nm
| rowspan="4" |112<ref name=":10">{{cite web|title=NVIDIA G72 GPU Specs|url=https://www.techpowerup.com/gpu-specs/nvidia-g72.g46|access-date=2021-02-16|website=TechPowerUp|language=en}}</ref>
| rowspan="4" |81
| 450
| 400
| 2:2:4:2
| 3.2<br />6.4
| rowspan="4" |DDR2
| rowspan="3" |1,800
| rowspan="3" |900
| rowspan="3" |1,800
| rowspan="3" |337.5
| 18.0
| 11
|-
! style="text-align:left" |GeForce 7300 SE
| rowspan="2" |March 22, 2006
| rowspan="2" |350
| rowspan="2" |333
| rowspan="3" |4:3:4:2
| rowspan="2" |128
| rowspan="2" |2.656<br />5.328
| 25.2
| ?
|-
! style="text-align:left" |GeForce 7300 LE
| 25.2
| 13
|-
! style="text-align:left" |GeForce 7300 GS
| January 18, 2006
| 550
| 400
| rowspan="2" |128<br />256
| 6.4
| 64
|
|
| 2,200
| 412.5
| 39.6
| 10
|-
! style="text-align:left
|
| G73
| 177<ref name=":11">{{cite web|title=NVIDIA G73 GPU Specs|url=https://www.techpowerup.com/gpu-specs/nvidia-g73.g48|access-date=2021-02-16|website=TechPowerUp|language=en}}</ref>
| 125
| AGP 8x<br />PCIe x16
| 350
| 325 (DDR2)<br />700 (GDDR3)
| 8:5:8:4
| 10.4<br />22.4
| DDR2<br />GDDR3
| 128
| 2,800
| 1,400
| 2,800
| 437.5
| 47.6
| 24
|-
! style="text-align:left" |GeForce 7500 LE
| 2006
| G72
| 112<ref name=":10" />
| 81
| PCIe x16
| 475<br />550
| 405<br />324
|
| 64<br />128<br />256
| 6.480<br />5.2
| DDR2
|
|
|
| 2,200
| 593.8
| 34.2<br />39.6
| 10
|-
! style="text-align:left
| March 22, 2006 (PCIe)<br />July 1, 2006 (AGP)
| rowspan="2" |G73
| rowspan="4" |177<ref name=":11" />
| rowspan="4" |125
| rowspan="3" |AGP 8x<br />PCIe x16
| 400
| rowspan="3" |400 (DDR2)<br />700 (GDDR3)
| rowspan="4" |12:5:12:8
| rowspan="6" |256
| rowspan="3" |12.8<br />22.4
| rowspan="3" |DDR2<br />GDDR3
| rowspan="4" |128
| 4,800
| 3,200
| 4,800
| 500
| 73.6
| 30
|-
! style="text-align:left" |GeForce 7600 GT
| March 9, 2006 (PCIe)<br />July 15, 2006 (AGP)
| rowspan="2" |560
| rowspan="2" |6,720
| rowspan="2" |4,480
| rowspan="2" |6,720
| rowspan="2" |700
| 103.0
| ?
|-
! style="text-align:left" |GeForce 7600 GT 80 nm
| January 8, 2007
| G73-B1
| rowspan="2" |TSMC 80 nm
| 103.0
| 48
|-
! style="text-align:left" |GeForce 7650 GS
| March 22, 2006
| G73
| PCIe x16
| 450
|
|
| DDR2
| 5,400
| 3,600
| 5,400
| 562.5
| 82.8
| ?
|-
! style="text-align:left" |GeForce 7800 GS
| February 2, 2006
| rowspan="3" |G70
| rowspan="3" |TSMC 110 nm
| rowspan="3" |302<ref>{{cite web|title=NVIDIA G70 GPU Specs|url=https://www.techpowerup.com/gpu-specs/nvidia-g70.g40|access-date=2021-02-16|website=TechPowerUp|language=en}}</ref>
| rowspan="3" |333
| AGP 8x
| 375
| 600
| 16:8:16:8
| 38.4
| rowspan="10" |GDDR3
| rowspan="10" |256
| 6,000
| 3,000
| 6,000
| 750
| 96.0
| 70
|-
! style="text-align:left" |GeForce 7800 GT
| August 11, 2005
| rowspan="2" |PCIe x16
| 400
| 500
| 20:7:20:16
| 32
| 8,000
| 6,400
| 8,000
| 700
| 118.4
| 59
|-
! style="text-align:left" |GeForce 7800 GTX
| June 22, 2005 (256 MiB)<br />November 14, 2005 (512 MiB)
| 430 (256 MiB)<br />550 (512 MiB)
| 600 (256 MiB)<br />850 (512 MiB)
| 24:8:24:16
| 256<br />512
| 38.4 (256 MiB)<br /> 54.4 (512 MiB)
| 10,320 (256 MiB)<br />13,200 (512 MiB)
| 6,880 (256 MiB)<br />8,800 (512 MiB)
| 10,320 (256 MiB)<br />13,200 (512 MiB)
| 860 (256 MiB)<br />1,100 (512 MiB)
| 151.4<br />193.6
| 100 (256 MiB)<br /> 116 (512 MiB)
|-
! style="text-align:left" |GeForce 7900 GS
| May 2006 (PCIe)<br />April 2, 2007 (AGP)
| rowspan="4" |G71
| rowspan="7" |TSMC 90 nm
| rowspan="7" |278<ref>{{cite web|title=NVIDIA G71 GPU Specs|url=https://www.techpowerup.com/gpu-specs/nvidia-g71.g47|access-date=2021-02-16|website=TechPowerUp|language=en}}</ref>
| rowspan="7" |196
| AGP 8x<br />PCIe x16
| rowspan="2" |450
| rowspan="3" |660
| 20:7:20:16
| rowspan="2" |256
| rowspan="3" |42.24
| 9,000
| rowspan="2" |7,200
| 9,000
| 787.5
| 133.2
| 50
|-
! style="text-align:left" |GeForce 7900 GT
| March 9, 2006
| rowspan="4" |PCIe x16
| rowspan="3" |24:8:24:16
| 10,800
| 10,800
| 900
|
|
|-
! style="text-align:left
|
| rowspan="2" |650
| rowspan="2" |512
| rowspan="2" |15,600
| rowspan="2" |10,400
| rowspan="2" |15,600
| rowspan="2" |1,300
| 228.8
| ?
|-
! style="text-align:left" |GeForce 7900 GTX
| rowspan="2" |March 9, 2006
| 800
| 51.2
| 228.8
| 105
|-
! style="text-align:left" |GeForce 7900 GX2
| 2x G71
| 500
| 600
| 2x 24:8:24:16
| 2x 512
| 2x 38.4
| 24,000
| 16,000
| 24,000
| 2,000
| 352.0
| ?
|-
! style="text-align:left" |GeForce 7950 GT
| September 6, 2006 (PCIe)<br />April 2, 2007 (AGP)
| G71
| AGP 8x<br />PCIe x16
| 550
| 700
| 24:8:24:16
| 512
| 44.8
| 13,200
| 8,800
| 13,200
| 1,100
| 193.6
| 90
|-
! style="text-align:left" |GeForce 7950 GX2
| June 5, 2006
| 2x G71
| PCIe x16
| 500
|
| 2x 24:8:24:16
|
|
|
|
|
|
|
| 136
|-
! rowspan="2" |Model
! rowspan="2" |Launch
! rowspan="2" |[[Code name]]
! rowspan="2" |Fab ([[nanometer|nm]])<ref name="vintage3d"/>
! rowspan="2" |Transistors (million)
! rowspan="2" |Die size (mm<sup>2</sup>)
! rowspan="2" |[[Computer bus|Bus]] [[I/O interface|interface]]
! rowspan="2" |Core clock ([[Hertz|MHz]])
! rowspan="2" |Memory clock ([[Hertz|MHz]])
! rowspan="2" |Core config{{efn|name=geforce 7 1|[[Pixel shader]]s: [[vertex shader]]s: [[texture mapping unit]]s: [[render output unit]]s}}
! Size ([[Mebibyte|MiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width ([[bit]])
! MOperations/s
! MPixels/s
! MTexels/s
! MVertices/s
! rowspan="2" |Performance (GFLOPS)<ref name=":17" /><ref name=":16" />
! rowspan="2" |TDP (Watts)
|-
! colspan="4" |Memory
! colspan="4" |[[Fillrate]]
|}
{{notelist}}
====Features====
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
! rowspan="2" | Model
! colspan="5" | Features
|-
! Gamma-correct antialiasing
! 64-bit OpenEXR HDR
! Scalable Link Interface (SLI)
! TurboCache
! Dual Link DVI
|-
! style="text-align:left;" | GeForce 7100 GS
| {{no}}
| {{no}}
| {{yes2}} Yes (PCIe only, No SLI bridge)
| {{yes}}
| {{no}}
|-
! style="text-align:left;" | GeForce 7200 GS
| {{yes}}
| {{yes}}
| {{no}}
| {{yes}}
| {{no}}
|-
! style="text-align:left;" | GeForce 7300 SE
| {{yes}}
| {{yes}}
| {{no}}
| {{yes}}
| {{no}}
|-
! style="text-align:left;" | GeForce 7300 LE
| {{yes}}
| {{yes}}
| {{no}}
| {{yes}}
| {{no}}
|-
! style="text-align:left;" | GeForce 7300 GS
| {{yes}}
|
| {{yes2}} Yes (PCIe only)
| {{yes}}
| {{no}}
|-
! style="text-align:left;" | GeForce 7300 GT
| {{yes}}
|
| {{yes2}} Yes (PCIe only, No SLI bridge)
| {{no}}
| {{yes|One port}}
|-
! style="text-align:left;" | GeForce 7600 GS
| {{yes}}
|
| {{yes2}} Yes (PCIe only)
| {{no}}
| {{yes|One port}}
|-
! style="text-align:left;" | GeForce 7600 GT
| {{yes}}
|
| {{yes2}} Yes (PCIe only)
| {{no}}
| {{yes|One port}}
|-
! style="text-align:left;" | GeForce 7600 GT
| {{yes}}
| {{yes}}
|
| {{no}}
| {{yes|One port}}
|-
! style="text-align:left;" | GeForce 7650 GS (80 nm)
| {{yes}}
|
| {{yes2}} Yes (Depending on OEM Design)
|
| {{yes|One port}}
|-
! style="text-align:left;" | GeForce 7800 GS
| {{yes}}
|
|
|
| {{yes|One port}}
|-
! style="text-align:left;" | GeForce 7800 GT
| {{yes}}
|
|
| {{no}}
| {{yes|One port}}
|-
! style="text-align:left;" | GeForce 7800 GTX
| {{yes}}
|
|
| {{no}}
| {{yes|One port}}
|-
! style="text-align:left;" | GeForce 7800 GTX 512
| {{yes}}
|
|
| {{no}}
| {{yes|One port}}
|-
! style="text-align:left;" | GeForce 7900 GS
| {{yes}}
| {{yes}}
| {{yes2}} Yes (PCIe only)
|
| {{yes|Two ports}}
|-
! style="text-align:left;" | GeForce 7900 GT
| {{yes}}
|
|
| {{no}}
| {{yes|Two ports}}
|-
! style="text-align:left;" | GeForce
| {{yes}}
|
|
| {{no}}
| {{yes|Two ports}}
|-
! style="text-align:left;" | GeForce 7900
| {{yes}}
|
|
| {{no}}
| {{yes|Two ports}}
|-
! style="text-align:left;" | GeForce 7900 GX2 (GTX Duo)
| {{yes}}
|
|
| {{no}}
| {{yes|Two ports}}
|-
! style="text-align:left;" | GeForce
| {{yes}}
| {{yes}}
| {{yes2}} Yes (PCIe only)
| {{no}}
| {{yes|Two ports}}
|-
! style="text-align:left;" | GeForce 7950 GX2
| {{yes}}
| {{yes}}
|
| {{no}}
| {{yes|Two ports}}
|}
===GeForce 8 (8xxx) series===
{{Further|GeForce 8 series|Tesla (microarchitecture)}}
* All models support coverage sample anti-aliasing, angle-independent anisotropic filtering, and 128-bit OpenEXR HDR.
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
! rowspan="2" | Model
! rowspan="2" | Launch
! rowspan="2" | [[Code name]]
! rowspan="2" |[[Semiconductor device fabrication|Fab]] ([[Nanometer|nm]])<ref name="vintage3d"/>
! rowspan="2" | Transistors (million)
! rowspan="2" | Die size (mm<sup>2</sup>)
! rowspan="2" | [[Computer bus|Bus]] [[I/O interface|interface]]
! colspan="3" | Clock rate
! rowspan="2" | Core config{{efn|name=geforce 8 1|[[Unified shader model|Unified shaders]]: [[texture mapping unit]]s: [[render output unit]]s}}
! colspan="2" | [[Fillrate]]
! colspan="4" | Memory
!Processing power ([[GFLOPS]]){{efn|name=geforce 8 3|To calculate the processing power, see [[Tesla (microarchitecture)#Performance|Performance]].}}
! colspan="4" | Supported [[Application programming interface|API]] version
! rowspan="2" | [[Thermal design power|TDP]] (Watts)
! rowspan="2" | Comments
|-
! Core ([[Hertz|MHz]])
! Shader ([[Hertz|MHz]])
! Memory ([[Hertz|MHz]])
! Pixel ([[Pixel|GP]]/s)
! Texture ([[Texel (graphics)|GT]]/s)
! Size ([[Mebibyte|MiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width ([[bit]])
![[Single precision floating-point format|Single precision]]
! [[Direct3D]]
! [[OpenGL]]
![[OpenCL]]
![[CUDA]]
|-
! style="text-align:left;" | GeForce 8100 mGPU<ref name="8x00_mGPU">{{cite web |url=http://www.tomshardware.com/reviews/amd-nvidia-chipset,1972-14.html |title=AMD and Nvidia Platforms Do Battle |website=Tomshardware.com |date=18 July 2008 |access-date=2015-12-11 |archive-date=27 January 2010 |archive-url=https://web.archive.org/web/20100127115556/http://www.tomshardware.com/reviews/amd-nvidia-chipset,1972-14.html |url-status=live}}</ref>
| rowspan="3" | 2008
| rowspan="3" | MCP78
| rowspan="5" |[[TSMC]] 80 nm
| {{unk}}
| {{unk}}
| rowspan="3" | PCIe 2.0 x16
| rowspan="3" | 500
| rowspan="2" | 1200
| rowspan="3" | 400<br />(system memory)
| rowspan="4" | 8:8:4
| rowspan="3" | 2
| rowspan="3" | 4
| Up to 512 from system memory
| rowspan="3" | 6.4<br />12.8
| rowspan="6" | DDR2
| rowspan="3" | 64<br />128
| rowspan="2" |28.8
| rowspan="6" | 10.0
| rowspan="18" | 3.3
| rowspan="3" |n/a
| rowspan="3" |n/a
| {{unk}}
| The PureVideo HD video decoding block is disabled
|-
! style="text-align:left;" | GeForce 8200 mGPU<ref name="8x00_mGPU" />
| {{unk}}
| {{unk}}
| gt
| {{unk}}
| rowspan="2" | [[PureVideo]] 3 with VP3
|-
! style="text-align:left;" | GeForce 8300 mGPU<ref name="8x00_mGPU" />
| {{unk}}
| {{unk}}
| 1500
| Up to 512 from system memory
|36
| {{unk}}
|-
! style="text-align:left;" | GeForce 8300 GS<ref name="G84_G86_Shader_Specs">{{cite web|url=http://www.theinquirer.net/default.aspx?article=38884 |archive-url=https://web.archive.org/web/20070925073721/http://www.theinquirer.net/default.aspx?article=38884|date=April 12, 2007|archive-date=September 25, 2007|url-status=dead |title=Nvidia GF8600/8500/8300 details revealed}}</ref>
| July 2007
| rowspan="2" | G86
| rowspan="3" | 210
| rowspan="2" | 127
| PCIe 1.0 x16
| rowspan="2" | 450
| rowspan="2" | 900
| rowspan="3" | 400
| rowspan="2" | 1.8
| rowspan="2" | 3.6
| 128<br />512
| rowspan="3" | 6.4
| rowspan="3" | 64
| 14.4
| rowspan="15" |1.1
| rowspan="3" |1.1
| rowspan="2" | 40
| OEM only
|-
! style="text-align:left;" | GeForce 8400 GS
| June 15, 2007
| PCIe 1.0 x16<br />PCI
| 16:8:4
| rowspan="2" | 128<br />256<br />512
| 28.8
| rowspan="4" |
|-
! style="text-align:left;" | GeForce 8400 GS rev.2
| December 10, 2007
| G98
| TSMC 65 nm
| 86
| PCIe 2.0 x16<br />PCIe x1<br />PCI
| 567
| 1400
| 8:8:4
| 2.268
| 4.536
| 22.4
| rowspan="2" | 25
|-
! style="text-align:left;" | GeForce 8400 GS rev.3
| July 12, 2010
| GT218
| TSMC 40 nm
| 260
| 57
| PCIe 2.0 x16
| 520<br />589
| 1230
| 400 (DDR2)<br />600 (DDR3)
| 8:4:4
| 2.08<br />2.356
| 2.08<br />2.356
| 512<br />1024
| 4.8<br />6.4<br />9.6
| DDR2<br />DDR3
| 32<br />64
|19.7
| 10.1
|1.2
|-
! style="text-align:left;" | GeForce 8500 GT
| April 17, 2007
| G86
| rowspan="4" | TSMC 80 nm
| 210
| 127
| PCIe 1.0 x16<br />PCI
| 450
| 900
| rowspan="2" | 400
| 16:8:4
|
|
| 256<br />512<br />1024
| rowspan="2" | 12.8
| rowspan="2" | DDR2
| rowspan="4" | 128
|28.8
| rowspan="11" | 10.0
| rowspan="5" |1.1
| 45
|-
! style="text-align:left;" | GeForce 8600
| April
| rowspan="3" | G84
| rowspan="3" | 289
| rowspan="3" | 169
| PCIe 1.0 x16
| rowspan="2" | 540
| 1180
|
| rowspan="2" | 4.32
|
| 256<br />512
|75.5
| rowspan="2" | 47
| OEM only
|-
! style="text-align:left;" | GeForce 8600 GT
| rowspan="2" | April 17, 2007
| PCIe 1.0 x16<br />PCI
| 1188
| 400<br />700
| rowspan="2" | 32:16:8
| 8.64
| 256<br />512<br />1024
| 12.8<br />22.4
| DDR2<br />GDDR3
|76
| rowspan="4" |
|-
! style="text-align:left;" | GeForce 8600 GTS
| PCIe 1.0 x16
| 675
| 1450
|
| 5.4
|
| 256<br />512
|
| rowspan="8" | GDDR3
|92.8
|
|-
! style="text-align:left;" | GeForce 8800 GS
| January 2008
|
| TSMC 65 nm
| 754
| 324
| PCIe 2.0 x16
| 550
| 1375
| rowspan="3" | 800
| 96:48:12
| 6.6
| 26.4
| 384<br />768
| 38.4
| 192
|264
| 105
|-
! style="text-align:left;" | GeForce 8800 GTS (G80)
| February 12, 2007 (320) <br />November 8, 2006 (640)
| rowspan="2" | G80
| rowspan="2" | TSMC 90 nm
| rowspan="2" | 681
| rowspan="2" | 484
| rowspan="2" | PCIe 1.0 x16
| 513
| 1188
| 96:24:20
| 10.3
| 12.3
| 320<br />640
| rowspan="2" | 64
| rowspan="2" | 320
|228
| rowspan="2" |1.0
| 146
|-
! style="text-align:left;" | GeForce 8800 GTS 112 (G80)
| November 19, 2007
| 500
| 1200
| 112:28:{{efn|name=geforce 8 2|Full G80 contains 32 texture address units and 64 texture filtering units unlike G92 which contains 64 texture address units and 64 texture filtering units<ref>{{cite web |url=http://www.anandtech.com/show/2549/3 |title=Lots More Compute, a Leetle More Texturing - Nvidia's 1.4 Billion Transistor GPU: GT200 Arrives as the GeForce GTX 280 & 260 |website=Anandtech.com |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20151222075904/http://www.anandtech.com/show/2549/3 |archive-date=2015-12-22 |url-status=dead}}</ref><ref>{{cite news |url=http://www.anandtech.com/show/2116/6 |title=Digging deeper into the shader core - Nvidia's GeForce 8800 (G80): GPUs Re-architected for Direct3D 10 |website=Anandtech.com |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20151222104756/http://www.anandtech.com/show/2116/6 |archive-date=2015-12-22 |url-status=dead}}</ref>}}20
| 10
|
| 640
|268.8
| 150
| only XFX, EVGA and BFG models, very short-lived<ref>{{cite web |author=Volker Rißka |url=https://www.computerbase.de/2007-11/zwei-neue-geforce-8800-gts-bis-dezember/ |title=Zwei neue GeForce 8800 GTS bis Dezember |website=Computerbase.de |date=2007-11-03 |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20110718001216/http://www.computerbase.de/news/hardware/grafikkarten/nvidia/2007/november/zwei-neue-geforce-8800-gts-bis-dezember/ |archive-date=2011-07-18 |url-status=live}}</ref>
|-
! style="text-align:left;" | GeForce 8800 GT
| October 29, 2007 (512)<br />December 11, 2007 (256, 1024)
| rowspan="2" | G92
| rowspan="2" | TSMC 65 nm
| rowspan="2" | 754
| rowspan="2" | 324
| rowspan="2" | PCIe 2.0 x16
| 600
| 1500
| 700 (256)<br />900 (512, 1024)
| 112:56:16
| 9.6
| 33.6
| 256<br />512<br />1024
| 57.6
| rowspan="2" | 256
|336
| rowspan="2" |1.1
| 125
| rowspan="4" |
|-
! style="text-align:left;" | GeForce 8800 GTS (G92)
| December 11, 2007
| 650
| 1625
| 970
| 128:64:16
| 10.4
| 41.6
| 512
| 62.1
|416
| 135
|-
! style="text-align:left;" | GeForce 8800 GTX
| November 8, 2006
| rowspan="2" | G80
| rowspan="2" | TSMC 90 nm
| rowspan="2" | 681
| rowspan="2" | 484
| rowspan="2" | PCIe 1.0 x16
| 575
| 1350
|
| rowspan="2" | 128:32:{{efn|name=geforce 8 2}}24
|
| 18.4
| rowspan="2" | 768
| 86.4
| rowspan="2" | 384
|345.6
| rowspan="2" |1.0
|
|-
! style="text-align:left;" | GeForce 8800 Ultra
| May 2, 2007
| 612
| 1500
|
| 14.7
|
| 103.7
|384
|
|-
! rowspan="2" | Model
! rowspan="2" | Launch
! rowspan="2" | [[Code name]]
! rowspan="2" | Fab ([[Nanometer|nm]])<ref name="vintage3d"/>
! rowspan="2" | Transistors (million)
! rowspan="2" | Die size (mm<sup>2</sup>)
! rowspan="2" | [[Computer bus|Bus]] [[I/O interface|interface]]
! Core ([[Hertz|MHz]])
! Shader ([[Hertz|MHz]])
! Memory ([[Hertz|MHz]])
! rowspan="2" | Core config{{efn|name=geforce 8 1|[[Unified shader model|Unified shaders]]: [[texture mapping unit]]s: [[render output unit]]s}}
! Pixel ([[Pixel|GP]]/s)
! Texture ([[Texel (graphics)|GT]]/s)
! Size ([[Mebibyte|MiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width ([[bit]])
![[Single precision floating-point format|Single precision]]
! [[Direct3D]]
! [[OpenGL]]
![[OpenCL]]
![[CUDA]]
! rowspan="2" | TDP (Watts)
! rowspan="2" | Comments
|-
! colspan="3" | Clock rate
! colspan="2" | [[Fillrate]]
! colspan="4" | Memory
!Processing power ([[GFLOPS]]){{efn|name=geforce 8 3|To calculate the processing power, see [[Tesla (microarchitecture)#Performance|Performance]].}}
! colspan="4" | Supported API version
|}
{{notelist}}
====Features====
* Compute Capability 1.1: has support for atomic functions, which are used to write thread-safe programs (see the sketch after this list).
* Compute Capability 1.2: for details see [[CUDA#Version features and specifications|CUDA]]
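As a minimal sketch of what the Compute Capability 1.1 atomic functions enable (the kernel name, dummy data, and launch configuration below are illustrative and not taken from this list), the following CUDA program builds a byte histogram with <code>atomicAdd()</code>, which keeps concurrent increments of the same bin thread-safe:

<syntaxhighlight lang="cuda">
// Illustrative sketch only: a byte-value histogram relying on atomicAdd(),
// the feature introduced with Compute Capability 1.1. Kernel name, data and
// launch configuration are hypothetical, not taken from this list.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void histogram256(const unsigned char *data, int n, unsigned int *bins)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        // atomicAdd serialises concurrent increments of the same bin,
        // so the count stays correct even when many threads collide.
        atomicAdd(&bins[data[i]], 1u);
    }
}

int main()
{
    const int n = 1 << 20;
    unsigned char *d_data;
    unsigned int  *d_bins;
    cudaMalloc((void **)&d_data, n);
    cudaMalloc((void **)&d_bins, 256 * sizeof(unsigned int));
    cudaMemset(d_data, 7, n);                         // dummy input: every byte is 7
    cudaMemset(d_bins, 0, 256 * sizeof(unsigned int));

    histogram256<<<(n + 255) / 256, 256>>>(d_data, n, d_bins);

    unsigned int bins[256];
    cudaMemcpy(bins, d_bins, sizeof(bins), cudaMemcpyDeviceToHost);
    printf("bin[7] = %u (expected %d)\n", bins[7], n);

    cudaFree(d_data);
    cudaFree(d_bins);
    return 0;
}
</syntaxhighlight>

With a CUDA toolkit old enough to target these GPUs this would be built with something like <code>nvcc -arch=sm_11 histogram.cu</code>; targeting Compute Capability 1.0 instead turns the <code>atomicAdd()</code> call into a compile error, which is the capability gap described above.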
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
! rowspan="2" | Model
! colspan="7" | Features
|-
! Scalable<br />Link<br />Interface<br />(SLI)
! 3-Way<br />SLI
! [[PureVideo]] HD<br />with VP1
! [[PureVideo]] 2 with VP2,<br />BSP Engine, and AES128 Engine
! [[PureVideo]] 3 with VP3,<br />BSP Engine, and AES128 Engine
! [[PureVideo]] 4 with VP4
! Compute<br />ability
|-
! style="text-align:left;" | GeForce 8300 GS (G86)
| {{no}}
| {{no}}
| {{no}}
| {{yes}}
| {{no}}
| {{no}}
| {{yes|1.1}}
|-
! style="text-align:left;" | GeForce 8400 GS Rev. 2 (G98)
| {{no}}
| {{no}}
| {{no}}
| {{no}}
| {{yes}}
| {{no}}
| {{yes|1.1}}
|-
! style="text-align:left;" | GeForce 8400 GS Rev. 3 (GT218)
| {{no}}
| {{no}}
| {{no}}
| {{no}}
| {{no}}
| {{yes}}
| {{yes|1.2}}
|-
! style="text-align:left;" | GeForce 8500 GT
| {{yes}}
| {{no}}
| {{no}}
| {{yes}}
| {{no}}
| {{no}}
| {{yes|1.1}}
|-
! style="text-align:left;" | GeForce 8600 GT
| {{yes}}
| {{no}}
| {{no}}
| {{yes}}
| {{no}}
| {{no}}
| {{yes|1.1}}
|-
! style="text-align:left;" | GeForce 8600 GTS
| {{yes}}
| {{no}}
| {{no}}
| {{yes}}
| {{no}}
| {{no}}
| {{yes|1.1}}
|-
! style="text-align:left;" | GeForce 8800 GS (G92)
| {{yes}}
| {{no}}
| {{no}}
| {{yes}}
| {{no}}
| {{no}}
| {{yes|1.1}}
|-
! style="text-align:left;" | GeForce 8800 GTS (G80)
| {{yes}}
| {{no}}
| {{yes}}
| {{no}}
| {{no}}
| {{no}}
| {{no|1.0}}
|-
! style="text-align:left;" | GeForce 8800 GTS Rev. 2 (G80)
| {{yes}}
| {{no}}
| {{yes}}
| {{no}}
| {{no}}
| {{no}}
| {{no|1.0}}
|-
! style="text-align:left;" | GeForce 8800 GT (G92)
| {{yes}}
| {{no}}
| {{no}}
| {{yes}}
| {{no}}
| {{no}}
| {{yes|1.1}}
|-
! style="text-align:left;" | GeForce 8800 GTS (G92)
| {{yes}}
| {{no}}
| {{no}}
| {{yes}}
| {{no}}
| {{no}}
| {{yes|1.1}}
|-
! style="text-align:left;" | GeForce 8800 GTX
| {{yes}}
| {{yes}}
| {{yes}}
| {{no}}
| {{no}}
| {{no}}
| {{no|1.0}}
|-
! style="text-align:left;" | GeForce 8800 Ultra
| {{yes}}
| {{yes}}
| {{yes}}
| {{no}}
| {{no}}
| {{no}}
| {{no|1.0}}
|}
===GeForce 9 series===
* All models support Coverage Sample Anti-Aliasing, Angle-Independent Anisotropic Filtering, 128-bit OpenEXR HDR
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
! rowspan="2" | Model
! rowspan="2" | Launch
! rowspan="2" | [[Code name]]
! rowspan="2" | [[Semiconductor device fabrication|Fab]] ([[Nanometer|nm]])<ref name="vintage3d"/>
! rowspan="2" | Transistors (million)
! rowspan="2" | Die size (mm<sup>2</sup>)
! rowspan="2" | [[Computer bus|Bus]] [[I/O interface|interface]]
! colspan="3" | Clock rate
! rowspan="2" | Core config{{efn|name=geforce 9 1|[[Unified shader model|Unified shaders]]: [[texture mapping unit]]s: [[render output unit]]s}}
! colspan="4" | Memory
! colspan="2" |[[Fillrate]]
! Processing power ([[GFLOPS]]){{efn|name=geforce 9 2|To calculate the processing power see [[Tesla (microarchitecture)#Performance]].}}
! colspan="2" | Supported [[Application programming interface|API]] version
! rowspan="2" | [[Thermal design power|TDP]] (Watts)
! rowspan="2" | Comments
|-
! Core ([[Hertz|MHz]])
! Shader ([[Hertz|MHz]])
! Memory ([[Hertz|MHz]])
! Size ([[Mebibyte|MiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width ([[bit]])
!Pixel ([[Pixel|GP]]/s)
!Texture ([[Texel (graphics)|GT]]/s)
! [[Single precision floating-point format|Single precision]]
! [[Direct3D]]
! [[OpenGL]]
|-
! style="text-align:left;" | GeForce 9300 mGPU
| rowspan="2" | October 2008
| MCP7A-S
| rowspan="2" | 65 nm
| rowspan="2" | 282
| rowspan="2" | 162
| rowspan="4" | PCIe 2.0 x16
| 450
| 1200
| rowspan="2" | 400<br />666
| rowspan="2" | 16:8:4
| rowspan="2" | Up to 512 from system memory
| rowspan="2" | 6.4/12.8<br />10.664/21.328
| rowspan="2" | DDR2<br />DDR3
| rowspan="2" | 64<br />128
|1.8
|3.6
| 57.6
| rowspan="17" | 10.0
| rowspan="17" | 3.3
| {{unk}}
| rowspan="2" | based on 8400 GS
|-
! style="text-align:left;" | GeForce 9400 mGPU
| MCP7A-U
| 580
| 1400
|2.32
|4.64
| 67.2
| 12
|-
! style="text-align:left;" | GeForce 9300 GE<ref name="pcinpact9300GE9300GS">{{cite web |url=http://www.pcinpact.com/affichage/43963-NVIDIA-9600GT-9300GE-9300GS/58125.htm |title=Nvidia GeForce 9300 GE |year=2008 |author=Nvidia Corporation |access-date=2010-02-22 |archive-url=https://web.archive.org/web/20090416090530/http://www.pcinpact.com/affichage/43963-NVIDIA-9600GT-9300GE-9300GS/58125.htm |archive-date=2009-04-16 |url-status=live}}</ref>
| rowspan="2" | June 2008
| rowspan="2" | G98
| rowspan="2" |[[TSMC]] 65 nm
| rowspan="2" | 210
| rowspan="2" | 86
| 540
| 1300
| rowspan="2" | 500
| rowspan="2" | 8:8:4
| rowspan="2" | 256
| rowspan="2" | 6.4<ref>{{cite web |url=https://www.techpowerup.com/gpu-specs/geforce-9300-ge.c1760 |title=NVIDIA GeForce 9300 GE Specs {{pipe}} TechPowerUp GPU Database |publisher=Techpowerup.com |date=August 22, 2022 |accessdate=2022-08-22}}</ref>
| rowspan="2" | DDR2
| rowspan="2" | 64
|2.16
|4.32
| 20.8
| rowspan="2" | 25
|
|-
! style="text-align:left;" | GeForce 9300 GS<ref name="pcinpact9300GE9300GS" />
| 567
| rowspan="4" | 1400
|2.268
|4.536
| 22.4
|
|-
! style="text-align:left;" | GeForce 9400 GT
| August 27, 2008
| G96-200-c1<br />G96a<br />G96b
| TSMC 55 nm
| rowspan="3" | 314
| rowspan="3" | 144
| rowspan="3" | PCIe 2.0 x16<br />PCI
| rowspan="3" | 550
| 400<br />800
| 16:8:4
| rowspan="3" | 256<br />512<br />1024
| 12.8<br />25.6
| DDR2<br />GDDR3
| rowspan="3" | 128
|2.2
|4.4
| 44.8
| rowspan="3" | 50
|
|-
!GeForce 9500 GS
|
|
|
|500
|24:12:4
|16.0
|DDR2
|
|
|60
|OEM
|-
! style="text-align:left;" | GeForce 9500 GT
| rowspan="2" | July 29, 2008
| G96-300-C1
| [[United Microelectronics Corporation|UMC]] 65 nm
| 500<br />800
| 32:16:8
| 16.0<br />25.6
| DDR2<br />GDDR3
|4.4
|8.8
| 89.6
|
|-
! style="text-align:left;" | GeForce 9600 GS
| G94a
| rowspan="2" | TSMC 65 nm
| 505
| 240
| rowspan="10" |PCIe 2.0 x16
| 500
| 1250
| 500
| 48:24:12
| 768
| 24.0
| DDR2
| rowspan="2" | 192
|6
|12
| 120
| {{unk}}
| OEM
|-
! style="text-align:left;" | GeForce 9600 GSO
| May 2008
| G92-150-A2
| 754
| 324
| 550
| 1375
| 800
| 96:48:12
| 384<br />768<br />1536
| 38.4
| rowspan="9" | GDDR3
|6.6
|26.4
| 264
| 84
| rowspan="2" |
|-
! style="text-align:left;" | GeForce 9600 GSO 512
| October 2008
| G94a<br />G94b
| TSMC 65 nm<br />TSMC 55 nm
| rowspan="3" | 505
| 240<br />196?{{citation needed|date=September 2012}}
| 650
| 1625
| 900
| 48:24:16
| 512
| 57.6
| rowspan="7" | 256
|10.4
|15.6
| 156
| 90
|-
! style="text-align:left;" | GeForce 9600 GT Green Edition
| 2009
| G94b
| TSMC 55 nm
| 196?{{citation needed|date=September 2012}}
| 600<br />625
| 1500<br />1625
| 700/900<br />900{{citation needed|date=September 2012}}
| rowspan="2" | 64:32:16
| rowspan="4" | 512<br />1024
| 44.8/57.6<br />57.6{{citation needed|date=September 2012}}
|9.6<br />10.0
|19.2<br />20.0
| 192<br />208
| 59
| Core voltage = 1.00 V
|-
! style="text-align:left;" | GeForce 9600 GT
| February 21, 2008
| G94-300-A1
| TSMC 65 nm
| 240
| 650
| 1625
| 900
| 57.6
|10.4
|20.8
| 208
| 95
|
|-
! style="text-align:left;" | GeForce 9800 GT Green Edition
| 2009
| G92a2<br />G92b
| TSMC/UMC 65 nm<br />TSMC/UMC 55 nm
| rowspan="4" | 754
| rowspan="2" | 324<br />260
| 550
| 1375
| 700<br />800<br />900
| rowspan="2" | 112:56:16
| 44.8<br />51.2<br />57.6
|8.8
|30.8
| 308
| 75
| Core voltage = 1.00 V
|-
! style="text-align:left;" | GeForce 9800 GT
| July 2008
| G92a<br />G92b
| 65 nm<br /> UMC 55 nm
| 600
| 1500
| 900
| 57.6
|9.6
|33.6
| 336
| 125<br />105
| rowspan="4" |
|-
! style="text-align:left;" | GeForce 9800 GTX
| April 1, 2008
| G92-420-A2
| TSMC 65 nm
| 324
| 675
| 1688
| 1100
| rowspan="2" | 128:64:16
| 512
| rowspan="2" | 70.4
|10.8
|43.2
| 432
| 140
|-
! style="text-align:left;" | GeForce 9800 GTX+
| July 16, 2008
| G92b
| TSMC 55 nm
| 260
| 738
| 1836
| 1100
| 512<br />1024
|11.808
|47.232
| 470
| 141
|-
! style="text-align:left;" | GeForce 9800 GX2
| March 18, 2008
| 2x G92
| TSMC/UMC 65 nm
| 2x 754
| 2x 324
| 600
| 1500
| 1000
| 2x 128:64:16
| 2x 512
| 2x 64.0
| 2x 256
|2x 9.6
|2x 38.4
| 2x 384
| 197
|-
! rowspan="2" | Model
! rowspan="2" | Launch
! rowspan="2" | [[Code name]]
! rowspan="2" | [[Semiconductor device fabrication|Fab]] ([[Nanometer|nm]])<ref name="vintage3d" />
! rowspan="2" | Transistors (million)
! rowspan="2" | Die size (mm<sup>2</sup>)
! rowspan="2" | [[Computer bus|Bus]] [[I/O interface|interface]]
! Core ([[Hertz|MHz]])
! Shader ([[Hertz|MHz]])
! Memory ([[Hertz|MHz]])
! rowspan="2" | Core config{{efn|name=geforce 9 1|[[Unified shader model|Unified shaders]]: [[texture mapping unit]]s: [[render output unit]]s}}
! Size ([[Mebibyte|MiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width ([[bit]])
!Pixel ([[Pixel|GP]]/s)
!Texture ([[Texel (graphics)|GT]]/s)
! [[Single precision floating-point format|Single precision]]
! [[Direct3D]]
! [[OpenGL]]
! rowspan="2" | [[Thermal design power|TDP]] (Watts)
! rowspan="2" | Comments
|-
! colspan="3" | Clock rate
! colspan="4" | Memory
! colspan="2" |[[Fillrate]]
! Processing power ([[GFLOPS]]){{efn|name=geforce 9 2|To calculate the processing power see [[Tesla (microarchitecture)#Performance]].}}
! colspan="2" | Supported [[Application programming interface|API]] version
|}
{{notelist}}
====Features====
* Compute Capability 1.1: has support for atomic functions, which are used to write thread-safe programs.
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
! rowspan="2" | Model
! colspan="3" | Features
|-
! Scalable Link Interface (SLI)
! [[PureVideo]] 2 with VP2,<br />BSP Engine, and AES128 Engine
! [[PureVideo]] 3 with VP3,<br />BSP Engine, and AES128 Engine
|-
! style="text-align:left;" | GeForce 9300 GE (G98)
| rowspan="7" {{yes}}
| rowspan="2" {{no}}
| rowspan="2" {{yes}}
|-
! style="text-align:left;" | GeForce 9300 GS (G98)
|-
! style="text-align:left;" | GeForce 9400 GT
| rowspan="8" {{yes}}
| rowspan="8" {{no}}
|-
! style="text-align:left;" | GeForce 9500 GT
|-
! style="text-align:left;" | GeForce 9600 GSO
|-
! style="text-align:left;" | GeForce 9600 GT
|-
! style="text-align:left;" | GeForce 9800 GT
|-
! style="text-align:left;" | GeForce 9800 GTX
| rowspan="2" {{yes}}<br />3-way
|-
! style="text-align:left;" | GeForce 9800 GTX+
|-
! style="text-align:left;" | GeForce 9800 GX2
| {{yes}}
|}
===GeForce 100 series===
{{Further|GeForce 100 series|Tesla (microarchitecture)}}
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
! rowspan="2" | Model
! rowspan="2" | Launch
! rowspan="2" | [[Code name]]
! rowspan="2" | [[Semiconductor device fabrication|Fab]] ([[Nanometer|nm]])
! rowspan="2" | Transistors (million)
! rowspan="2" | Die size (mm<sup>2</sup>)
! rowspan="2" | [[Computer bus|Bus]] [[I/O interface|interface]]
! colspan="3" | Clock rate
! rowspan="2" | Core config{{efn|name=geforce 100 1|[[Unified shader model|Unified shaders]]: [[texture mapping unit]]s: [[render output unit]]s}}
! colspan="4" | Memory configuration
! colspan="2" |[[Fillrate]]
! Processing power ([[GFLOPS]]){{efn|name=geforce 100 2|To calculate the processing power see [[Tesla (microarchitecture)#Performance]].}}
! colspan="2" | Supported [[Application programming interface|API]] version
! rowspan="2" | [[Thermal design power|TDP]] (Watts)
! rowspan="2" | Comments
|-
! Core ([[Hertz|MHz]])
! Shader ([[Hertz|MHz]])
! Memory ([[Hertz|MHz]])
! Size ([[Mebibyte|MiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! DRAM type
! Bus width ([[bit]])
!Pixel ([[Pixel|GP]]/s)
!Texture ([[Texel (graphics)|GT]]/s)
! [[Single precision floating-point format|Single precision]]
! [[Direct3D]]
! [[OpenGL]]
|-
! style="text-align:left;" | GeForce G 100
| rowspan="5" | March 10, 2009
| G98
| [[TSMC]] 65 nm
| 210
| 86
| rowspan="5" | PCIe 2.0 x16
| 567
| rowspan="2" | 1400
| 500
| 8:8:4
| rowspan="2" | 512
| 8.0
| rowspan="3" | DDR2
| 64
|2.15
|4.3
| 22.4
| rowspan="5" | 10.0
| rowspan="5" | 3.3
| 35
| rowspan="5" | OEM products
|-
! style="text-align:left;" | GeForce GT 120
| G96b
| rowspan="4" | TSMC 55 nm
| 314
| 121
| rowspan="2" | 500
| 800
| 32:16:8
| 16.0
| 128
|4.4
|8.8
| 89.6
| 50
|-
! style="text-align:left;" | GeForce GT 130
| rowspan="2" | G94b
| rowspan="2" | 505
| rowspan="2" | 196
| 1250
| 500
| 48:24:12
| 1536
| 24.0
| 192
|6
|12
| 120
| 75
|-
! style="text-align:left;" | GeForce GT 140
| 650
| 1625
| 1800
| 64:32:16
| 512<br />1024
| 57.6
| rowspan="2" | GDDR3
| rowspan="2" | 256
|10.4
|20.8
| 208
| 105
|-
! style="text-align:left;" | GeForce GTS 150
| G92b
| 754
| 260
| 738
| 1836
| 1000
| 128:64:16
| 1024
| 64.0
|11.808
|47.232
| 470
| 141
|}
{{notelist}}
===GeForce 200 series===
{{Further|GeForce 200 series|Tesla (microarchitecture)}}
* All models support Coverage Sample Anti-Aliasing, Angle-Independent Anisotropic Filtering, 128-bit OpenEXR HDR
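* As a worked example of the single-precision figures listed below (using the GeForce GTX 280 entry, and counting one multiply–add, i.e. 2 FLOPs, per shader per clock, which is the convention this table follows): <math>240 \times 2 \times 1296\ \text{MHz} \approx 622\ \text{GFLOPS}</math>.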
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
! rowspan="2" | Model
! rowspan="2" | Launch
! rowspan="2" | [[Code name]]
! rowspan="2" | [[Semiconductor device fabrication|Fab]] ([[Nanometer|nm]])
! rowspan="2" | Transistors (million)
! rowspan="2" | Die size (mm<sup>2</sup>)
! rowspan="2" | [[Computer bus|Bus]] [[I/O interface|interface]]
! colspan="3" | Clock rate
! rowspan="2" | Core config{{efn|name=geforce 200 1|[[Unified shader model|Unified shaders]]: [[texture mapping unit]]s: [[render output unit]]s}}
! colspan="4" | Memory configuration
! colspan="2" |[[Fillrate]]
! Processing power ([[GFLOPS]]){{efn|name=geforce 200 2|To calculate the processing power see [[Tesla (microarchitecture)#Performance]].}}
! colspan="2" | Supported [[Application programming interface|API]] version
! rowspan="2" | [[Thermal design power|TDP]] (Watts)
! rowspan="2" | Comments
! rowspan="2" | Release Price (USD)
|-
! Core ([[Hertz|MHz]])
! Shader ([[Hertz|MHz]])
! Memory ([[Transfers per second|GT/s]])
! Size ([[Mebibyte|MiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! DRAM type
! Bus width ([[bit]])
!Pixel ([[Pixel|GP]]/s)
!Texture ([[Texel (graphics)|GT]]/s)
! [[Single precision floating-point format|Single precision]]
! [[Direct3D]]
! [[OpenGL]]
|-
! style="text-align:left;" | GeForce 205
| November 26, 2009
| GT218
| rowspan="2" | [[TSMC]] [[40 nm]]
| rowspan="2" | 260
| rowspan="2" | 57
| PCIe 2.0 x16
| 589
| 1402
| 1
| 8:4:4
| 512
| 8
| DDR2
| 64
|2.356
|2.356
| 22.4
| rowspan="3" | 10.1
| rowspan="15" | 3.3
| 30.5
| OEM only
|-
! style="text-align:left;" | GeForce 210
| rowspan="2" | October 12, 2009
| GT218-325-B1
| PCIe 2.0 x16<br />PCIe x1<br />PCI
| 520<br />589
| 1230<br />1402
| 0.8<br />1–1.6
| 16:8:4
| 512<br />1024
| 4.0<br />8.0<br />12.8
| DDR2<br />DDR3
| 32<br />64
|2.356
|4.712
| 36.4<br />44.9
| 30.5
| rowspan="2" |
|-
! style="text-align:left;" | GeForce GT 220
| GT216-300-A2
| TSMC 40 nm
| 486
| 100
| rowspan="13" | PCIe 2.0 x16
| 615(OEM)<br />625
| 1335(OEM)<br />1360
| 1<br />1.58
| 48:16:8
| 512<br />1024
| 16.0<br />25.3
| DDR2<br />DDR3
| 64<br />128
|5
|10
| 128.2(OEM)<br />130.6
| 58
|-
! rowspan="2" style="text-align:left;" | GeForce GT 230
| October 12, 2009<ref>{{cite web |url=https://www.techpowerup.com/gpudb/617/geforce-gt-230 |access-date=2018-08-06 |title=NVIDIA GeForce GT 230 Specs |archive-date=6 August 2018 |archive-url=https://web.archive.org/web/20180806210800/https://www.techpowerup.com/gpudb/617/geforce-gt-230 |url-status=live}}</ref>
| G94b
| rowspan="2" | TSMC/[[United Microelectronics Corporation|UMC]] 55 nm
| 505
| 196?
| 650
| 1625
| 1.8
| 48:24:16
| 512<br />1024
| 57.6
| GDDR3
| 256
|10.4
|15.6
| 156
| rowspan="2" | 10
| rowspan="2" | 75
| rowspan="2" | OEM only
|-
| April 27, 2009<ref>{{Cite web |date=2023-08-18 |title=Pegatron GT 230 Specs |url=https://www.techpowerup.com/gpu-specs/pegatron-gt-230.b4915 |access-date=2023-08-18 |website=TechPowerUp |language=en}}</ref>
| G92b
| 754
| 260
| 500
| 1242
| 1
| 96:48:12
| 1536
| 24
| DDR2
| 192
|6
|24
| 238.5
|-
! style="text-align:left;" | GeForce GT 240
| November 17, 2009
| GT215-450-A2
| TSMC 40 nm
| 727
| 139
| 550
| 1340
| 1.8<br />2<br />3.4(GDDR5)
| 96:32:8
| 512<br />1024
| 28.8(OEM)<br />32<br />54.4(GDDR5)
| DDR3<br />GDDR3<br />GDDR5
| 128
|4.4
|17.6
| 257.3
| 10.1
| 69
|
|-
! style="text-align:left;" | GeForce GTS 240
| July 1, 2009<ref>{{cite web |url=https://www.techpowerup.com/gpudb/242/geforce-gts-240-oem |title=NVIDIA GeForce GTS 240 OEM {{pipe}} techPowerUp GPU Database |access-date=2018-08-06 |archive-url=https://archive.today/20161212024149/https://www.techpowerup.com/gpudb/242/geforce-gts-240-oem |archive-date=2016-12-12 |url-status=live}}</ref>
| G92a<br />G92b
| TSMC 65 nm<br />TSMC/UMC 55 nm
| rowspan="3" | 754
| 324<br />260
| 675
| 1620
| 2.2
| 112:56:16
| 1024
| 70.4
| rowspan="9" | GDDR3
| rowspan="3" | 256
|10.8
|37.8
| 362.9
| rowspan="9" | 10.0
| 120
| OEM only
|-
! rowspan="2" style="text-align:left;" | GeForce GTS 250
| 2009
| G92b
| TSMC/[[United Microelectronics Corporation|UMC]] 55 nm
| rowspan="2" | 260
| 702
| 1512
| 2
| rowspan="2" | 128:64:16
| 512<br />1024
| 64.0
|11.2
|44.9
| 387
| 130
|
|-
| March 3, 2009
| G92-428-B1
| TSMC 65 nm<br />TSMC/UMC 55 nm
| 738
| 1836
| 2<br />2.2
| 512<br />1024
| 64.0<br />70.4
|11.808
|47.232
| 470
| 150
| Some cards are rebranded GeForce 9800 GTX+
| $150<br />($130 512 MiB)
|-
! rowspan="2" style="text-align:left;" | GeForce GTX 260
| June 16, 2008
| GT200-100-A2
| 65 nm
| rowspan="5" | 1400
| 576
| 576
| 1242
| 1.998
| 192:64:28
| 896
| 111.9
| rowspan="3" | 448
|16.128
|36.864
| 477
| 182
| Replaced by GTX 260 Core 216
| $400 (dropped to $270 after 3 months<ref name="ReferenceC">{{cite web |url=https://www.techpowerup.com/review/zotac-geforce-gtx-260-amp2-edition/ |title=Zotac GeForce GTX 260 Amp<sup>2</sup> Edition 216 Shaders Review |date=16 September 2008 |access-date=2018-08-06 |archive-date=24 December 2019 |archive-url=https://web.archive.org/web/20191224053204/https://www.techpowerup.com/review/zotac-geforce-gtx-260-amp2-edition/ |url-status=live}}</ref>)
|-
| September 16, 2008<br /> November 27, 2008 (55 nm)
| GT200-103-A2<br />G200-103-B2
| 65 nm<br />55 nm
| 576<br />470
| 576
| 1242<br />1350
| 1.998
| 216:72:28
| 896 (1792)
| 111.9
|16.128
|41.472
| 536.5<br />583.2
| 182<br />171
| 55 nm version has less TDP
| $300
|-
! style="text-align:left;" | GeForce GTX 275
| April 9, 2009
| GT200-105-B3
| TSMC/UMC 55 nm
| 470
| 633
| 1404
| 2.268
| 240:80:28
| 896 (1792)
| 127.0
|17.724
|50.6
| 674
| 219
| Effectively one-half of the GTX 295
| $250
|-
! style="text-align:left;" | GeForce GTX 280
| June 17, 2008
| GT200-300-A2
| 65 nm
| 576
| 602
| 1296
| 2.214
| rowspan="2" | 240:80:32
| 1024
| 141.7
| 512
|19.264
|48.16
| 622
| 236
| Replaced by GTX 285
| $650 (dropped to $430 after 3 months<ref name="ReferenceC"/>)
|-
! style="text-align:left;" | GeForce GTX 285
| January 15, 2009
| GT200-350-B3
| rowspan="2" | TSMC/UMC 55 nm
| 470
| 648
| 1476
| 2.484
| 1024 (2048)
| 159.0
| 512
|20.736
|51.84
| 708.48
| 204
| EVGA GTX285 Classified supports 4-way SLI
| $400
|-
! style="text-align:left;" | GeForce GTX 295
| January 8, 2009
| 2x GT200-400-B3
| 2x 1400
| 2x 470
| 576
| 1242
| 1.998
| 2x 240:80:28
| 2x 896
| 2x 111.9
| 2x 448
| 2x 16.128
| 2x 46.08
| 1192.3
| 289
| Dual PCB models were replaced with a single PCB model with 2 GPUs
| $500
|-
! rowspan="2" | Model
! rowspan="2" | Launch
! rowspan="2" | [[Code name]]
! rowspan="2" | Fab ([[Nanometer|nm]])
! rowspan="2" | Transistors (million)
! rowspan="2" | Die size (mm<sup>2</sup>)
! rowspan="2" | [[Computer bus|Bus]] [[I/O interface|interface]]
! Core ([[Hertz|MHz]])
! Shader ([[Hertz|MHz]])
! Memory ([[Transfers per second|GT/s]])
! rowspan="2" | Core config{{efn|name=geforce 200 1|[[Unified shader model|Unified shaders]]: [[texture mapping unit]]s: [[render output unit]]s}}
! Size ([[Mebibyte|MiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! DRAM type
! Bus width ([[bit]])
! Pixel ([[Pixel|GP]]/s)
! Texture ([[Texel (graphics)|GT]]/s)
! [[Single precision floating-point format|Single precision]]
! [[Direct3D]]
! [[OpenGL]]
! rowspan="2" | [[Thermal design power|TDP]] (Watts)
! rowspan="2" | Comments
! rowspan="2" | Release Price (USD)
|-
! colspan="3" | Clock rate
! colspan="4" | Memory configuration
! colspan="2" |[[Fillrate]]
! Processing power ([[GFLOPS]]){{efn|name=geforce 200 2|To calculate the processing power see [[Tesla (microarchitecture)#Performance]].}}
! colspan="2" | Supported API version
|}
{{notelist}}
====Features====
* Compute Capability: 1.1 (G92 [GTS250] GPU)
* Compute Capability: 1.2 (GT215, GT216, GT218 GPUs)
* Compute Capability: 1.3 adds [[Double precision floating-point format|double precision]] support for use in [[GPGPU]] applications (GT200a/b GPUs only); see the sketch after this list.
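As a minimal sketch of the double-precision support added in Compute Capability 1.3 (kernel and variable names below are illustrative, not taken from this list), a double-precision AXPY kernel looks like this:

<syntaxhighlight lang="cuda">
// Illustrative sketch only: a double-precision AXPY kernel (y = a*x + y).
// Double-precision arithmetic requires Compute Capability 1.3 (GT200a/b here)
// or newer; names and sizes are hypothetical.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void daxpy(int n, double a, const double *x, double *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];   // evaluated in 64-bit floating point on CC >= 1.3
}

int main()
{
    const int n = 1 << 20;
    double *d_x, *d_y;
    cudaMalloc((void **)&d_x, n * sizeof(double));
    cudaMalloc((void **)&d_y, n * sizeof(double));
    // A real program would copy input data into d_x and d_y here.

    daxpy<<<(n + 255) / 256, 256>>>(n, 2.0, d_x, d_y);
    cudaDeviceSynchronize();
    printf("kernel status: %s\n", cudaGetErrorString(cudaGetLastError()));

    cudaFree(d_x);
    cudaFree(d_y);
    return 0;
}
</syntaxhighlight>

On GPUs below Compute Capability 1.3, older CUDA toolkits demote the <code>double</code> arithmetic to single precision (with a compiler warning), so only GT200-class and later cards actually produce 64-bit results.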
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
! rowspan="2" | Model
! colspan="3" | Features
|-
! Scalable Link Interface (SLI)
! [[PureVideo]] 2 with VP2<br />Engine: (BSP and 240 AES)
! [[PureVideo]] 4 with VP4 Engine
|-
! style="text-align:left;" | GeForce 210
| rowspan="3" {{no}}
| rowspan="3" {{no}}
| colspan="3" rowspan="3" {{yes}}
|-
! style="text-align:left;" | GeForce GT 220
|-
! style="text-align:left;" | GeForce GT 240
|-
! style="text-align:left;" | GeForce GTS 250
| rowspan="7" {{yes}}<br />3-Way (4-way for EVGA 285 Classified)
| rowspan="8" {{yes}}
| rowspan="8" {{no}}
|-
! style="text-align:left;" | GeForce GTX 260
|-
! style="text-align:left;" | GeForce GTX 260 Core 216
|-
! style="text-align:left;" | GeForce GTX 260 Core 216 (55 nm)
|-
! style="text-align:left;" | GeForce GTX 275
|-
! style="text-align:left;" | GeForce GTX 280
|-
! style="text-align:left;" | GeForce GTX 285
|-
! style="text-align:left;" | GeForce GTX 295
| {{yes}}
|}
===GeForce 300 series===
{{Further|GeForce 300 series|Tesla (microarchitecture)}}
* All models support the following [[Application programming interface|API]] levels: [[Direct3D]] 10.1 and [[OpenGL]] 3.3
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
! rowspan="2" | Model
! rowspan="2" | Launch
! rowspan="2" | [[Code name]]
! rowspan="2" | [[Semiconductor device fabrication|Fab]] ([[Nanometer|nm]])
! rowspan="2" | Transistors (million)
! rowspan="2" | Die size (mm<sup>2</sup>)
! rowspan="2" | [[Computer bus|Bus]] [[I/O interface|interface]]
! colspan="3" | Clock rate
! rowspan="2" | Core config{{efn|name=geforce 300 1|[[Unified shader model|Unified shaders]]: [[texture mapping unit]]s: [[render output unit]]s}}
! colspan="4" | Memory configuration
! colspan="2" |[[Fillrate]]
! Processing power ([[GFLOPS]]){{efn|name=geforce 300 2|To calculate the processing power see [[Tesla (microarchitecture)#Performance]].}}
! rowspan="2" | [[Thermal design power|TDP]] (Watts)
! rowspan="2" | Comments
|-
! Core ([[Hertz|MHz]])
! Shader ([[Hertz|MHz]])
! Memory ([[Hertz|MHz]])
! Size ([[Mebibyte|MiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! DRAM type
! Bus width ([[bit]])
! Pixel ([[Pixel|GP]]/s)
! Texture ([[Texel (graphics)|GT]]/s)
! [[Single precision floating-point format|Single precision]]
|-
! style="text-align:left;" | GeForce 310
| November 27, 2009
| GT218
| rowspan="7" |[[TSMC]] 40 nm
| 260
| 57
| rowspan="7" | PCIe 2.0 x16
| 589
| 1402
| 1000
| 16:8:4
| 512
| 8
| DDR2
| rowspan="2" | 64
| 2.356
| 4.712
| 44.8
| 30.5
| OEM Card, similar to GeForce 210
|-
! style="text-align:left;" | GeForce 315
| rowspan="6" | February 2010
| GT216
| 486
| 100
| 475
| 1100
| rowspan="2" | 1580
| 48:16:4
| 512
| 12.6
| DDR3
| 3.8
| 7.6
| 105.6
| 33
| OEM Card, similar to GeForce GT 220
|-
! style="text-align:left;" | GeForce GT 320
| GT215
| rowspan="5" | 727
| rowspan="5" | 144
| 540
| 1302
| 72:24:8
| 1024
| 25.3
| rowspan="3" | GDDR3
| 128
| 4.32
| 12.96
| 187.5
| 43
| OEM Card
|-
! rowspan="3" style="text-align:left;" | GeForce GT 330<ref>{{cite web |url=http://www.techpowerup.com/gpudb/1758/geforce-gt-330-oem.html |title=Nvidia GeForce GT 330 OEM {{pipe}} techPowerUp GPU Database |website=Techpowerup.com |access-date=2015-12-11 |archive-url=https://archive.today/20141018032011/http://www.techpowerup.com/gpudb/1758/geforce-gt-330-oem.html |archive-date=2014-10-18 |url-status=live}}</ref>
| GT215-301-A3<ref>{{cite web|url=https://www.techpowerup.com/gpu-specs/geforce-gt-330-oem.c1758 |title=NVIDIA GeForce GT 330 OEM Specs |website=TechPowerUp |language=en |access-date=2020-03-23}}</ref>
| 550
| 1350
|
| rowspan="2" | 96:32:8
| 512
| 32.00
| 128
| 4.40
| 17.60
| 257.3
| rowspan="3" | 75
| rowspan="3" | Specifications vary depending on OEM, similar to GT230 v2.
|-
| G92<ref>{{cite web |url=https://www.techpowerup.com/gpu-specs/geforce-gt-330-oem.c3314 |title=NVIDIA GeForce GT 330 OEM Specs |website=TechPowerUp |language=en |access-date=2020-03-23}}</ref>
| rowspan="2" |500
| rowspan="2" |1250
|
| 256
| 51.20
| 256
| 4.000
| rowspan="2" |24.00
| rowspan="2" |240.0
|-
| G92B<ref>{{cite web |url=https://www.techpowerup.com/gpu-specs/geforce-gt-330-oem.c1757 |title=NVIDIA GeForce GT 330 OEM Specs |website=TechPowerUp |language=en |access-date=2020-03-23}}</ref>
|
| 96:32:16
| 1024
| 16.32
| DDR2
| 128
| 8.000
|-
! style="text-align:left;" | GeForce GT 340
| GT215
| 550
| 1340
| 3400
| 96:32:8
| 512<br />1024
| 54.4
| GDDR5<ref>{{cite web |url=http://www.nvidia.com/object/product_geforce_gt_340_us.html |title=GeForce GT 340 OEM {{pipe}} GeForce |website=Nvidia.com |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20120228151256/http://www.nvidia.com/object/product_geforce_gt_340_us.html |archive-date=2012-02-28 |url-status=live}}</ref>
| 128
|
|
| 257.3
| 69
| OEM Card, similar to GT 240
|}
{{notelist}}
===GeForce 400 series===
{{Further|GeForce 400 series|Fermi (microarchitecture)}}
* All cards have a PCIe 2.0 x16 [[Computer bus|Bus]] [[I/O interface|interface]].
* The base requirement for Vulkan 1.0 in terms of hardware features was OpenGL ES 3.1 which is a subset of OpenGL 4.3, which is supported on all Fermi and newer cards.
* Memory bandwidths stated in the following table refer to Nvidia reference designs. Actual bandwidth can be higher or ''lower'' depending on the maker of the graphic board.
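* As a worked example of how the reference figures are derived (using the GeForce GTX 480 entry below): bandwidth is the effective memory transfer rate multiplied by the bus width in bytes, <math>3696\ \text{MT/s} \times \tfrac{384\ \text{bit}}{8} = 177{,}408\ \text{MB/s} \approx 177.4\ \text{GB/s}</math>.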
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
! rowspan="2" | Model
! rowspan="2" | Launch
! rowspan="2" | [[Code name]]
! rowspan="2" | [[Semiconductor device fabrication|Fab]] ([[Nanometer|nm]])
! rowspan="2" | Transistors (million)
! rowspan="2" | Die size (mm<sup>2</sup>)
! colspan="3" | Clock rate
! rowspan="2" | SM count
! rowspan="2" | Core config{{efn|name=geforce 400 1|[[Unified shader model|Unified shaders]]: [[texture mapping unit]]s: [[render output unit]]s}}{{efn|name=geforce 400 3|Each SM in the GF100 contains 4 texture filtering units for every texture address unit. The complete GF100 die contains 64 texture address units and 256 texture filtering units.<ref name="anandtech.com">{{cite web |url=http://anandtech.com/show/2977/nvidia-s-geforce-gtx-480-and-gtx-470-6-months-late-was-it-worth-the-wait-/3 |title=The GF100 Recap - Nvidia's GeForce GTX 480 and GTX 470: 6 Months Late, Was It Worth the Wait? |website=Anandtech.com |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20110805164512/http://www.anandtech.com/show/2977/nvidia-s-geforce-gtx-480-and-gtx-470-6-months-late-was-it-worth-the-wait-/3 |archive-date=2011-08-05 |url-status=dead}}</ref> Each SM in the GF104/106/108 architecture contains 8 texture filtering units for every texture address unit but has doubled both addressing and filtering units. The complete GF104 die also contains 64 texture address units and 512 texture filtering units despite the halved SM count, the complete GF106 die contains 32 texture address units and 256 texture filtering units and the complete GF108 die contains 16 texture address units and 128 texture filtering units.<ref>{{cite web |url=http://www.anandtech.com/show/3809/nvidias-geforce-gtx-460-the-200-king/2 |title=GF104: Nvidia Goes Superscalar - Nvidia's GeForce GTX 460: The $200 King |website=Anandtech.com |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20151222100647/http://www.anandtech.com/show/3809/nvidias-geforce-gtx-460-the-200-king/2 |archive-date=2015-12-22 |url-status=dead}}</ref>}}
! colspan="4" | Memory configuration
! colspan="2" |[[Fillrate]]
! colspan="2" | Processing power ([[GFLOPS]]){{efn|name=geforce 400 2|To calculate the processing power see [[Fermi (microarchitecture)#Performance]].}}
! colspan="5" |Supported [[Application programming interface|API]] version
! rowspan="2" | [[Thermal design power|TDP]] (Watts){{efn|name=geforce 400 4|Note that while GTX 460's TDP is comparable to that of AMD's HD5000 series, GF100-based cards (GTX 480/470/465) are rated much lower but pull significantly more power, e.g. GTX 480 with 250W TDP consumes More power than an HD 5970 with 297W TDP.<ref>{{cite web |url=http://www.tomshardware.com/reviews/geforce-gtx-480,2585-15.html |title=GeForce GTX 480 And 470: From Fermi And GF100 To Actual Cards! |website=Tomshardware.com |date=27 March 2010 |access-date=2015-12-11 |archive-date=11 September 2013 |archive-url=https://web.archive.org/web/20130911085858/http://www.tomshardware.com/reviews/geforce-gtx-480,2585-15.html |url-status=live}}</ref>}}
! rowspan="2" | Release Price (USD)
|-
! Core ([[Hertz|MHz]])
! Shader ([[Hertz|MHz]])
! Memory ([[Hertz|MHz]])
! Size ([[Gibibyte|GiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! DRAM type
! Bus width ([[bit]])
! Pixel ([[Pixel|GP]]/s)
! Texture ([[Texel (graphics)|GT]]/s)
! [[Single precision floating-point format|Single precision]]
! [[Double precision floating-point format|Double precision]]
! [[Vulkan (API)|Vulkan]]
! [[Direct3D]]
! [[OpenGL]]
! [[OpenCL]]{{efn|name=geforce 400 6|The 400 series is the only non-OEM family from GeForce 9 to 700 series not to include an official dual-GPU system. However, on March 18, 2011, [[EVGA Corporation|EVGA]] released the first single-PCB card with dual 460s on board. The card came with 2048 MiB of memory at 3600 MHz and 672 shader processors at 1400 MHz and was offered at the MSRP of $429.}}
! [[CUDA]]
|-
! style="text-align:left;" | GeForce 405{{efn|name=geforce 400 7|The GeForce 405 card is a rebranded GeForce 310 which itself is a rebranded GeForce 210.}}
| September 16, 2011
| GT216<br />GT218
| [[40 nm]]
| 486<br />260
| 100<br />57
| 475<br />589
| 1100<br />1402
| 800<br />790
| rowspan="2" | 1
| 48:16:8<br />16:8:4
| 0.5<br />1
| 12.6
| DDR3
| 64
| 3.8<br />2.36
| 7.6<br />4.71
| 105.6<br />44.86
| {{unk}}
| rowspan="17" |n/a<ref name="vulkandrv" />
| 10.1
| 3.3
| rowspan="2" |1.1
| 1.2
| 30.5
| rowspan="3" | OEM
|-
! style="text-align:left;" | GeForce GT 420
| September 3, 2010
| GF108
| rowspan="16" | [[TSMC]] 40 nm
| rowspan="5" | 585
| rowspan="5" | 116
| rowspan="4" | 700
| rowspan="4" | 1400
| 1800
| 48:4:4
| 0.5
| 28.8
| rowspan="4" | GDDR3
| rowspan="2" | 128
| rowspan="4" |2.8
| 2.8
| 134.4
| {{unk}}
| rowspan="16" |12 FL 11_1
| rowspan="16" |4.6
| rowspan="13" |2.1
| 50
|-
! rowspan="3" style="text-align:left;" | GeForce GT 430
| rowspan="3" | October 11, 2010
| rowspan="3" | GF108<br />GF108-300-A1
| 1600<br />1800
| rowspan="4" | 2
| rowspan="4" | 96:16:4
| 0.5
| 25.6<br />28.8
| rowspan="3" |11.2
| 268.8
| {{unk}}
| 1.2
| 60
|-
| 1800
| rowspan="2" | 0.5<br />1<br />2
| 28.8
| 128
| 268.8
| rowspan="2" | Unknown
| rowspan="11" |1.1
| 49
| rowspan="2" | $79
|-
| 1300
| 10.4
| 64
|
|
|-
! rowspan="2" style="text-align:left;" | GeForce GT 440
| February 1, 2011
| GF108
| 810
| 1620
| 1800<br />3200
| 0.5<br />1
| 28.8<br />51.2
| GDDR3<br />GDDR5
| 128
| 3.2
| 12.9
| 311.04
| {{unk}}
| 65
| $100
|-
| rowspan="2" | October 11, 2010
| rowspan="2" | GF106
| rowspan="3" | 1170
| rowspan="3" | 238
| 594
| 1189
| 1600<br />1800
| rowspan="2" | 3
| rowspan="2" | 144:24:24
| 1.5<br />3
| 43.2
| DDR3
| rowspan="2" | 192
| 4.86
| 19.44
| 342.43
| {{unk}}
| 56
| rowspan="2" | OEM
|-
! rowspan="2" style="text-align:left;" | GeForce GTS 450
| 790
| 1580
| 4000
| 1.5
| 96.0
| rowspan="10" | GDDR5
| 4.7
| 18.9
| 455.04
| {{unk}}
| 106
|-
| September 13, 2010<br />March 15, 2011
| GF106-250<br />GF116-200
| 783
| 1566
| 1200-1600 (GDDR3) <br /> 3608 (GDDR5)
| 4
| 192:32:16
| 0.5<br />1
| 57.7
| 128
| 6.2
| 25.0
| 601.34
| {{unk}}
| 106
| $129
|-
! style="text-align:left;" | GeForce GTX 460 SE
| November 15, 2010
| GF104-225-A1
| rowspan="5" | 1950
| rowspan="5" | 332
| rowspan="2" | 650
| rowspan="2" | 1300
| rowspan="2" | 3400
| 6
| 288:48:32
| 1
| 108.8
| rowspan="2" | 256
| 7.8
| 31.2
| 748.8
| {{unk}}
| 150
| $160
|-
! rowspan="4" style="text-align:left;" | GeForce GTX 460
| October 11, 2010
| GF104
| rowspan="4" | 7
| 336:56:32
| 1
| 108.8
| 9.1
| 36.4
| 873.6
| {{unk}}
|
| OEM
|-
| rowspan="2" | July 12, 2010
| rowspan="2" | GF104-300-KB-A1
| rowspan="2" | 675
| rowspan="2" | 1350
| rowspan="2" | 3600
| 336:56:24
| 0.75
| 86.4
| 192
| rowspan="2" |9.4
| rowspan="2" |37.8
| 907.2
| rowspan="2" | Unknown
|
| $199
|-
| 336:56:32
| 1<br />2
| 115.2
| 256
|
| 160
| $229
|-
| September 24, 2011
| GF114
| 779
| 1557
| 4008
| 336:56:24
| 1
| 96.2
| 192
| 10.9
| 43.6
| 1045.6
| {{unk}}
|
| $199
|-
! style="text-align:left;" | GeForce GTX 465
| May 31, 2010
| GF100-030-A3
| rowspan="3" | 3000<ref>{{cite web |url= http://www.nvidia.com/content/PDF/fermi_white_papers/NVIDIA_Fermi_Compute_Architecture_Whitepaper.pdf |title= Nvidia Fermi Compute Architecture Whitepaper |access-date= 2010-04-17 |archive-url= https://web.archive.org/web/20091122092429/http://www.nvidia.com/content/PDF/fermi_white_papers/NVIDIA_Fermi_Compute_Architecture_Whitepaper.pdf |archive-date= 2009-11-22 |url-status= live}} {{small|( 855KB)}}, page 11 of 22</ref>
| rowspan="3" | 529
| rowspan="2" | 608
| rowspan="2" | 1215
| 3206
| 11
| 352:44:32
| 1
| 102.7
| 256
| 13.3
| 26.7
| 855.36
| 106.92
| rowspan="3" |1.2
| rowspan="3" |2.0
| 200{{efn|name=geforce 400 4}}
| $279
|-
! style="text-align:left;" | GeForce GTX 470
| March 26, 2010
| GF100-275-A3
| 3348
| 14
| 448:56:40
| 1.25
| 133.9
| 320
| 17.0
| 34.0
| 1088.64
| 136.08
| 215{{efn|name=geforce 400 4}}
| $349
|-
! style="text-align:left;" | GeForce GTX 480
| March 26, 2010
| GF100-375-A3
| 701
| 1401
| 3696
| 15
| 480:60:48
| 1.5
| 177.4
| 384
| 21.0
| 42.0
| 1344.96
| 168.12
| 250{{efn|name=geforce 400 4}}
| $499
|-
! rowspan="2" | Model
! rowspan="2" | Launch
! rowspan="2" | [[Code name]]
! rowspan="2" | Fab ([[Nanometer|nm]])
! rowspan="2" | Transistors (million)
! rowspan="2" | Die size (mm<sup>2</sup>)
! colspan="3" | Clock rate
! rowspan="2" | SM count
! rowspan="2" | Core config{{efn|name=geforce 400 1|[[Unified shader model|Unified shaders]]: [[texture mapping unit]]s: [[render output unit]]s}}{{efn|name=geforce 400 3}}
! colspan="4" | Memory configuration
! colspan="2" |[[Fillrate]]
! colspan="2" | Processing power ([[GFLOPS]]){{efn|name=geforce 400 2|To calculate the processing power see [[Fermi (microarchitecture)#Performance]].}}
! colspan="5" |Supported [[Application programming interface|API]] version
! rowspan="2" | [[Thermal design power|TDP]] (Watts){{efn|name=geforce 400 4}}
! rowspan="2" | Release Price (USD)
|-
! Core ([[Hertz|MHz]])
! Shader ([[Hertz|MHz]])
! Memory ([[Hertz|MHz]])
! Size ([[Gibibyte|GiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! DRAM type
! Bus width ([[bit]])
! Pixel ([[Pixel|GP]]/s)
! Texture ([[Texel (graphics)|GT]]/s)
! [[Single precision floating-point format|Single precision]]
! [[Double precision floating-point format|Double precision]]
! [[Vulkan (API)|Vulkan]]
! [[Direct3D]]
! [[OpenGL]]
! [[OpenCL]]{{efn|name=geforce 400 6|The 400 series is the only non-OEM family from GeForce 9 to 700 series not to include an official dual-GPU system. However, on March 18, 2011, [[EVGA Corporation|EVGA]] released the first single-PCB card with dual 460s on board. The card came with 2048 MiB of memory at 3600 MHz and 672 shader processors at 1400 MHz and was offered at the MSRP of $429.}}
! [[CUDA]]
|}
{{notelist}}
===GeForce 500 series===
{{Further|GeForce 500 series|Fermi (microarchitecture)}}
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
! rowspan="2" | Model{{spaces}}name
! rowspan="2" | Launch
! rowspan="2" | [[Code name]]
! rowspan="2" | Fab ([[Nanometer|nm]])
! rowspan="2" | Transistors (million)
! rowspan="2" | Die size (mm<sup>2</sup>)
! rowspan="2" | [[Computer bus|Bus]] [[I/O interface|interface]]
! colspan="3" | Clock rate
! rowspan="2" | SM count
! rowspan="2" | Core config {{efn|name=geforce 500 1|[[Unified shader model|Unified shaders]]: [[texture mapping unit]]s: [[render output unit]]s}}{{efn|name=geforce 500 3|Each SM in the GF110 contains 4 texture filtering units for every texture address unit. The complete GF110 die contains 64 texture address units and 256 texture filtering units.<ref>{{cite web |url=http://www.anandtech.com/show/4008/nvidias-geforce-gtx-580/2 |title=GF110: Fermi Learns Some New Tricks - Nvidia's GeForce GTX 580: Fermi Refined |website=Anandtech.com |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20160113203734/http://www.anandtech.com/show/4008/nvidias-geforce-gtx-580/2 |archive-date=2016-01-13 |url-status=dead}}</ref> Each SM in the GF114/116/118 architecture contains 8 texture filtering units for every texture address unit but has doubled both addressing and filtering units.}}
! colspan="4" | Memory configuration
! colspan="2" | [[Fillrate]]
! colspan="2" | Processing power ([[GFLOPS]]) {{efn|name=geforce 500 2|To calculate the processing power see [[Fermi (microarchitecture)#Performance]].}}
! colspan="5" | Supported [[Application programming interface|API]] version
! rowspan="2" | [[Thermal design power|TDP]] (Watts) {{efn|name=geforce 500 6|Similar to previous generation, GTX 580 and most likely future GTX 570{{Update inline|date=April 2021}}, while reflecting its improvement over GF100, still have lower rated TDP and higher power consumption, e.g. GTX580 (243W TDP) is slightly less power hungry than GTX 480 (250W TDP). This is managed by clock throttling through drivers when a dedicated power hungry application is identified that could breach card TDP. Application name changing will disable throttling and enable full power consumption, which in some cases could be close to that of GTX480.<ref>{{cite web |url=http://www.anandtech.com/Show/Index/4008?cPage=14&all=False&sort=0&page=17&slug=nvidias-geforce-gtx-580 |title=Power, Temperature, and Noise - Nvidia's GeForce GTX 580: Fermi Refined |website=Anandtech.com |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20160113203734/http://www.anandtech.com/Show/Index/4008?cPage=14&all=False&sort=0&page=17&slug=nvidias-geforce-gtx-580 |archive-date=2016-01-13 |url-status=dead}}</ref>}}
! rowspan="2" | Release Price (USD)
|-
! Core ([[Hertz|MHz]])
! Shader ([[Hertz|MHz]])
! Memory ([[Hertz|MHz]])
! Size ([[Gibibyte|GiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! DRAM type
! Bus width ([[bit]])
! Pixel ([[Pixel|GP]]/s)
! Texture ([[Texel (graphics)|GT]]/s)
! [[Single precision floating-point format|Single precision]]
! [[Double precision floating-point format|Double precision]]
! [[Vulkan (API)|Vulkan]]
! [[Direct3D]]
! [[OpenGL]]
! [[OpenCL]]
! [[CUDA]]
|-
! style="text-align:left;" | GeForce 510
| September 29, 2011
| rowspan="2" | GF119
| rowspan="15" |[[TSMC]]<br />[[40 nm]]
| rowspan="2" | 292
| rowspan="2" | 79
| PCIe 2.0 x16
| 523
| 1046
| rowspan="4" | 1800
| rowspan="2" | 1
| rowspan="2" | 48:8:4
| rowspan="3" | 1<br />2
| 14.4
| rowspan="4" | DDR3
| rowspan="2" | 64
| 2.1
| 4.5
| 100.4
| {{unk}}
| rowspan="15" |n/a<ref name="vulkandrv" />
| rowspan="15" |12 FL 11_1
| rowspan="15" |4.6
| rowspan="15" |1.1
| rowspan="12" |2.1
| 25
| OEM
|-
! style="text-align:left;" | GeForce GT{{spaces}}520
| April 12, 2011
| PCIe 2.0 x16<br />PCIe 2.0 x1<br />PCI
| 810
| 1620
| 14.4
| 3.25
| 6.5
| 155.5
| {{unk}}
| 29
| $59
|-
! style="text-align:left;" | GeForce GT{{spaces}}530<ref>{{cite web|title=NVIDIA GeForce GT 530 OEM Specs|url=https://www.techpowerup.com/gpu-specs/geforce-gt-530-oem.c630|access-date=September 25, 2022|website=TechPowerUp}}</ref>
| rowspan="3" | May 14, 2011
| GF108-220
| 585
| 116
| rowspan="13" | PCIe 2.0 x16
| 700
|
| 2
| 96:16:4
| 28.8
| 128
| 2.8
| 11.2
| 268.8
| 22.40
| 50
| OEM
|-
! rowspan="2" style="text-align:left;" | GeForce GT{{spaces}}545
| rowspan="2" | GF116
| rowspan="3" | ~1170
| rowspan="3" | ~238
| 720
| 1440
| rowspan="2" | 3
| rowspan="2" | 144:24:16
| 1.5<br />3
| 43
| 192
| 11.52
| 17.28
| 415.07
| {{unk}}
| 70
| $149
|-
| 870
| 1740
| 3996
| 1
| 64
| rowspan="11" | GDDR5
| 128
| 13.92
| 20.88
| 501.12
| {{unk}}
| 105
| OEM
|-
! style="text-align:left;" | GeForce GTX{{spaces}}550 Ti
| March 15, 2011
| GF116-400
| 900
| 1800
|
|
| 192:32:24
| 0.75+0.25<br />1.5
| 65.7+32.8<br />98.5
| 128+64 {{efn|name=geforce 500 9|1024 MiB RAM on 192-bit bus assemble with 4 x (128 MiB) + 2 x (256 MiB).}}<br />192
| 21.6
| 28.8
| 691.2
| {{unk}}
| 116
| $149
|-
! style="text-align:left;" | GeForce GTX 555
| May 14, 2011
| GF114
| rowspan="4" | 1950
| rowspan="4" | 332
| rowspan="2" | 736
| rowspan="2" | 1472
| rowspan="2" | 3828
| rowspan="2" | 6
| rowspan="2" | 288:48:24
| rowspan="2" | 1
| rowspan="2" | 91.9
| rowspan="2" | 128+64 {{efn|name=geforce 500 9|1024 MiB RAM on 192-bit bus assemble with 4 x (128 MiB) + 2 x (256 MiB).}}
| rowspan="2" |17.6
| rowspan="2" |35.3
| rowspan="2" |847.9
| {{unk}}
| rowspan="3" |150
| rowspan="2" | OEM
|-
! style="text-align:left;" | GeForce GTX 560 SE
| February 20, 2012<ref>{{Cite web |date=2023-08-18 |title=NVIDIA GeForce GTX 560 SE Specs |url=https://www.techpowerup.com/gpu-specs/geforce-gtx-560-se.c341 |access-date=2023-08-18 |website=TechPowerUp |language=en}}</ref>
| GF114-200-KB-A1{{efn|name=geforce 500 4|Internally referred to as GF104B<ref name="gpu-tech1">{{cite web |url=http://www.gpu-tech.org/content.php/144-%E2%80%A6and-GF110s-real-name-is-GF100B-%28and-who-guesses-what-GF114-is-%29 |title=...and GF110s real name is: GF100B |website=GPU-Tech.org |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20160113203736/http://www.gpu-tech.org/content.php/144-%E2%80%A6and-GF110s-real-name-is-GF100B-%28and-who-guesses-what-GF114-is-%29 |archive-date=2016-01-13 |url-status=live}}</ref>}}
| {{unk}}
|
|-
! style="text-align:left;" | GeForce GTX 560
| May 17, 2011
| GF114-325-A1 {{efn|name=geforce 500 4}}
| 810
| 1620
| rowspan="2" | 4008
| 7
| 336:56:32
| rowspan="2" | 1 2
| 128.1
| rowspan="2" | 256
| 25.92
| 45.36
| 1088.6
| {{unk}}
| $199
|-
! rowspan="2" style="text-align:left;" | GeForce GTX 560 Ti
| January 25, 2011
| GF114-400-A1 {{efn|name=geforce 500 4}}
| 822
| 1645
| 8
| 384:64:32
| 128.26
| 26.3
| 52.61
| 1263.4
| 110
| 170
| $249
|-
| May 30, 2011
| GF110 {{efn|name=geforce 500 5|Internally referred to as GF100B<ref name="gpu-tech1" />}}
| rowspan="4" | 3000<ref name="GTX580-anandtech">{{cite web |url=http://www.anandtech.com/show/4008/nvidias-geforce-gtx-580 |title=Nvidia's GeForce GTX 580: Fermi Refined |date=November 9, 2010 |publisher=[[AnandTech]] |author=Ryan Smith |access-date=November 9, 2010 |archive-url=https://web.archive.org/web/20101110202636/http://www.anandtech.com/show/4008/nvidias-geforce-gtx-580 |archive-date=November 10, 2010 |url-status=dead}}</ref>
| rowspan="4" | 520<ref name="GTX580-anandtech" />
| rowspan="3" | 732
| rowspan="3" | 1464
| rowspan="3" | 3800 <!--according to Field explanations-->
| 11
| 352:44:40
| 1.25<br />2.5
| rowspan="3" | 152
| rowspan="3" | 320
| rowspan="3" |29.28
| 32.21
| 1030.7
| 128.83
| rowspan="2" | 210 {{efn|name=geforce 500 6}}
| OEM
|-
! style="text-align:left;" | GeForce GTX 560 Ti (448 Cores)
| November 29, 2011
| GF110-270-A1 {{efn|name=geforce 500 5}}
| 14
| 448:56:40
| 1.25
| 40.99
| 1311.7
| 163.97
| $289
|-
! style="text-align:left;" | GeForce GTX 570
| December 7, 2010
| GF110-275-A1 {{efn|name=geforce 500 5}}
| 15
| 480:60:40
| 1.25<br />2.5
| 43.92
| 1405.4
| 175.68
| rowspan="3" |2.0
| 219 {{efn|name=geforce 500 6}}
| $349
|-
! style="text-align:left;" | GeForce GTX 580
| November 9, 2010
| GF110-375-A1 {{efn|name=geforce 500 5}}
| 772
| 1544
| 4008
| 16
| 512:64:48
| 1.5<br />3 {{efn|name=geforce 500 7|Some companies have announced that they will be offering the GTX580 with 3GB RAM.<ref>{{cite web |url=http://www.evga.com/products/moreInfo.asp?pn=03G-P3-1584-AR&family=GeForce+500+Series+Family&sw |title=Products - Featured Products |publisher=EVGA |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20120324052045/http://www.evga.com/products/moreInfo.asp?pn=03G-P3-1584-AR&family=GeForce+500+Series+Family&sw |archive-date=2012-03-24 |url-status=live}}</ref>}}
| 192.384
| 384
| 37.05
| 49.41
| 1581.1
| 197.63
| 244 {{efn|name=geforce 500 6}}<ref>{{cite web |url=http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_580/3.html |title=Nvidia GeForce GTX 580 1536 MB Review |website=TechPowerUp |date=2010-11-09 |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20151222085437/http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_580/3.html |archive-date=2015-12-22 |url-status=live}}</ref>
| $499
|-
! style="text-align:left;" | GeForce GTX 590
| March 24, 2011
| 2x GF110-351-A1
| 2x 3000
| 2x 520
| 607
| 1215
| 3414
| 2x 16
| 2x 512:64:48
| 2x 1.5
| 2x 163.87
| 2x 384
| 2x 29.14
| 2x 38.85
| 2488.3
| 311.04
| 365
| $699
|-
! rowspan="2" | Model
! rowspan="2" | Launch
! rowspan="2" | [[Code name]]
! rowspan="2" | Fab ([[Nanometer|nm]])
! rowspan="2" | Transistors (million)
! rowspan="2" | Die size (mm<sup>2</sup>)
! rowspan="2" | [[Computer bus|Bus]] [[I/O interface|interface]]
! colspan="3" | Clock rate
! rowspan="2" | SM count
! rowspan="2" | Core config {{efn|name=geforce 500 1|[[Unified shader model|Unified shaders]]: [[texture mapping unit]]s: [[render output unit]]s}}{{efn|name=geforce 500 3}}
! colspan="4" | Memory configuration
! colspan="2" |[[Fillrate]]
! colspan="2" | Processing power ([[GFLOPS]]) {{efn|name=geforce 500 2|To calculate the processing power see [[Fermi (microarchitecture)#Performance]].}}
! colspan="5" | Supported [[Application programming interface|API]] version
! rowspan="2" | [[Thermal design power|TDP]] (Watts) {{efn|name=geforce 500 6}}
! rowspan="2" | Release Price (USD)
|-
! Core ([[Hertz|MHz]])
! Shader ([[Hertz|MHz]])
! Memory ([[Hertz|MHz]])
! Size ([[Gibibyte|GiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! DRAM type
! Bus width ([[bit]])
! Pixel ([[Pixel|GP]]/s)
! Texture ([[Texel (graphics)|GT]]/s)
! [[Single precision floating-point format|Single precision]]
! [[Double precision floating-point format|Double precision]]
! [[Vulkan (API)|Vulkan]]
! [[Direct3D]]
! [[OpenGL]]
! [[OpenCL]]
! [[CUDA]]
|}
{{notelist}}
===GeForce 600 series===
{{Further|GeForce 600 series|Kepler (microarchitecture)}}
* Adds [[Nvidia NVENC|NVENC]] hardware video encoding on GTX cards
* Several 600 series cards are rebranded 400 or 500 series cards.
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
! rowspan="2" | Model{{spaces}}name
! rowspan="2" | Launch
! rowspan="2" | [[Code name]]
! rowspan="2" | Fab ([[Nanometer|nm]])
! rowspan="2" | Transistors (million)
! rowspan="2" | Die size (mm<sup>2</sup>)
! rowspan="2" | [[Computer bus|Bus]] [[I/O interface|interface]]
! colspan="5" | Clock rate
! rowspan="2" | SM count
! rowspan="2" | Core config {{efn|name=geforce 600 1|[[Unified shader model|Unified shaders]]: [[texture mapping unit]]s: [[render output unit]]s}}
! colspan="4" | Memory configuration
! colspan="2" | [[Fillrate]]
! colspan="2" | Processing power ([[GFLOPS]]) {{efn|name=geforce 600 10|To calculate the processing power see [[Kepler (microarchitecture)#Performance]], or [[Fermi (microarchitecture)#Performance]].}}
! colspan="4" | Supported [[Application programming interface|API]] version
! rowspan="2" | [[Thermal design power|TDP]] (Watts)
! rowspan="2" | Release Price (USD)
|-
! Core ([[Hertz|MHz]])
! Average Boost ([[Hertz|MHz]])
! Max Boost ([[Hertz|MHz]])
! Shader ([[Hertz|MHz]])
! Memory ([[Hertz|MHz]])
! Size ([[Gibibyte|GiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! DRAM type
! Bus width ([[bit]])
! Pixel ([[Pixel|GP]]/s)
! Texture ([[Texel (graphics)|GT]]/s)
! [[Single precision floating-point format|Single precision]]
! [[Double precision floating-point format|Double precision]]
! [[Vulkan (API)|Vulkan]] {{efn|name=geforce 600 11|Vulkan 1.2 is only supported on Kepler cards.<ref name="vulkandrv" />}}
! [[Direct3D]]
! [[OpenGL]]
! [[OpenCL]]
|-
! style="text-align:left;" | GeForce 605 {{efn|name=geforce 600 2|The GeForce 605 (OEM) card is a rebranded GeForce 510.}}
| April 3, 2012
| GF119
| rowspan="5" |[[TSMC]]<br />[[40 nm]]
| rowspan="3" | 292
| rowspan="3" | 79
| PCIe 2.0 x16
| 523
| {{N/a}}
| {{N/a}}
| 1046
| 898<br />(1796)
| rowspan="3" | 1
| 48:8:4
| 0.5<br />1
| 14.4
| rowspan="7" | DDR3
| rowspan="5" | 64
| 2.09
| 4.2
| 100.4
| {{unk}}
| rowspan="5" {{N/a}}
| rowspan="27" | 12
| rowspan="27" | 4.6
| rowspan="27" | 1.2
| 25
| OEM
|-
! style="text-align:left;" | GeForce GT{{spaces}}610 {{efn|name=geforce 600 3|The GeForce GT 610 card is a rebranded GeForce GT 520.}}
| May 15, 2012
| GF119-300-A1
| PCIe 2.0 x16, PCIe x1, PCI
| rowspan="2" | 810
| {{N/a}}
| {{N/a}}
| rowspan="2" | 1620
| 1000<br />1800
| 48:8:4
| 0.5<br />1<br />2
| 8<br />14.4
| rowspan="2" |3.24
| 6.5
| 155.5
| {{unk}}
| 29
| Retail
|-
! rowspan="2" style="text-align:left;" | GeForce GT{{spaces}}620 {{efn|name=geforce 600 4|The GeForce GT 620 (OEM) card is a rebranded GeForce GT 520.}}
| April 3, 2012
| GF119
| rowspan="3" | PCIe 2.0 x16
| {{N/a}}
| {{N/a}}
| 898<br />(1796)
| 48:8:4
| 0.5<br />1
| 14.4
| 6.5
| 155.5
| {{unk}}
| 30
| OEM
|-
| May 15, 2012
| GF108-100-KB-A1
| 585
| 116
| 700
| {{N/a}}
| {{N/a}}
| 1400
| 1000–1800
| 2
| 96:16:4
| 1<br />2
| 8–14.4
| 2.8
| 11.2
| 268.8
| {{unk}}
| 49
| Retail
|-
! style="text-align:left;" | GeForce GT{{spaces}}625
| February 19, 2013
| GF119
| 292
| 79
| 810
| {{N/a}}
| {{N/a}}
| 1620
| 898<br />(1796)
| rowspan="2" | 1
| 48:8:4
| 0.5<br />1
| 14.4
| 3.24
| 6.5
| 155.5
| {{unk}}
| 30
| rowspan="2" | OEM
|-
! rowspan="4" style="text-align:left;" | GeForce GT{{spaces}}630 {{efn|name=geforce 600 6|The GeForce GT 630 (DDR3, 128-bit, retail) card is a rebranded GeForce GT 430 (DDR3, 128-bit).}}{{efn|name=geforce 600 7|The GeForce GT 630 (GDDR5) card is a rebranded GeForce GT 440 (GDDR5).}}
| April 24, 2012
| GK107
| TSMC [[28 nm]]
| 1300
| 118
| PCIe 3.0 x16
| 875
| {{N/a}}
| {{N/a}}
| 875
| 891<br />(1782)
| 192:16:16
| 1<br />2
| 28.5
| rowspan="3" | 128
| 14
| 14
| 336
| 14
| 1.2
| 50
|-
| rowspan="2" | May 15, 2012
| GF108-400-A1
| rowspan="2" | TSMC<br />40 nm
| rowspan="2" | 585
| rowspan="2" | 116
| rowspan="2" | PCIe 2.0 x16
| 700
| {{N/a}}
| {{N/a}}
| 1620
| 1600–1800
| rowspan="2" | 2
| 96:16:4
| 1<br />2<br />4
| 25.6–28.8
| 2.8
| 11.2
| 311
| {{unk}}
| rowspan="2" {{N/a}}
| 49
| rowspan="2" | Retail
|-
| GF108
| 810
| {{N/a}}
| {{N/a}}
| 1620
| 800<br />(3200)
| 96:16:4
| 1
| 51.2
| GDDR5
| 3.2
| 13
| 311
| {{unk}}
| 65
|-
| May 29, 2013
| GK208-301-A1
| rowspan="2" | TSMC<br />28 nm
| rowspan="2" | 1020
| rowspan="2" | 79
| PCIe 2.0 x8
| 902
| {{N/a}}
| {{N/a}}
| 902
| 900<br />(1800)
| rowspan="2" | 1
| 384:16:8
| rowspan="2" | 1<br />2
| 14.4
| rowspan="5" | DDR3
| rowspan="2" | 64
| 7.22
| 14.44
| 692.7
| {{unk}}
| rowspan="2" | 1.2
| 25
|
|-
! style="text-align:left;" | GeForce GT{{spaces}}635
| February 19, 2013
| GK208
| PCIe 3.0 x8
| 967
| {{N/a}}
| {{N/a}}
| 967
| 1001<br />(2002)
| 384:16:8
| 16
| 7.74
| 15.5
| 742.7
| {{unk}}
| 35
| rowspan="3" | OEM
|-
! rowspan="5" style="text-align:left;" | GeForce GT{{spaces}}640 {{efn|name=geforce 600 8|The GeForce GT 640 (OEM) GF116 card is a rebranded GeForce GT 545 (DDR3).}}
| rowspan="2" | April 24, 2012
| GF116
| TSMC<br />40 nm
| 1170
| 238
| PCIe 2.0 x16
| 720
| {{N/a}}
| {{N/a}}
| 1440
| 891<br />(1782)
| 3
| 144:24:24
| 1.5<br />3
| 42.8
| 192
| 17.3
| 17.3
| 414.7
| {{unk}}
| {{N/a}}
| 75
|-
| rowspan="3" | GK107
| rowspan="3" | TSMC<br />28 nm
| rowspan="3" | 1300
| rowspan="3" | 118
| rowspan="3" | PCIe 3.0 x16
| 797
| {{N/a}}
| {{N/a}}
| 797
| 891<br />(1782)
| rowspan="4" | 2
| rowspan="3" | 384:32:16
| 1<br />2
| 28.5
| rowspan="3" | 128
| 12.8
| 25.5
| 612.1
| 25.50
| rowspan="4" | 1.2
| 50
|-
| June 5, 2012
| 900
| {{N/a}}
| {{N/a}}
| 900
| 891<br />(1782)
| 2<br />4
| 28.5
| 14.4
| 28.8
| 691.2
| 28.8
| 65
| $100
|-
| April 24, 2012
| 950
| {{N/a}}
| {{N/a}}
| 950
| 1250<br />(5000)
| 1<br />2
| 80
| rowspan="14" | GDDR5
| 15.2
| 30.4
| 729.6
| 30.40
| 75
| OEM
|-
| May 29, 2013
| GK208-400-A1
| TSMC<br />28 nm
| 1020
| 79
| PCIe 2.0 x8
| 1046
| {{N/a}}
| {{N/a}}
| 1046
| 1252<br />(5008)
| 384:16:8
| rowspan="3" | 1
| 40.1
| 64
| 8.4
| 16.7
| 803.3
| {{unk}}
| 49
|
|-
! style="text-align:left;" | GeForce GT{{spaces}}645 {{efn|name=geforce 600 9|The GeForce GT 645 (OEM) card is a rebranded GeForce GTX 560 SE.}}
| April 24, 2012
| GF114-400-A1
| TSMC<br />40 nm
| 1950
| 332
| PCIe 2.0 x16
| 776
| {{N/a}}
| {{N/a}}
| 1552
| 1914
| 6
| 288:48:24
| 91.9
| 192
| 18.6
| 37.3
| 894
| {{unk}}
| {{N/a}}
| 140
| rowspan="2" | OEM
|-
! style="text-align:left;" | GeForce GTX{{spaces}}645
| April 22, 2013
| GK106
| rowspan="11" | TSMC<br />28 nm
| 2540
| 221
| rowspan="11" | PCIe 3.0 x16
| 823.5
| 888.5
| {{N/a}}
| 823
| 1000<br />(4000)
| 3
| 576:48:16
| 64
| rowspan="4" | 128
| 14.16
| 39.5
| 948.1
| 39.53
| rowspan="11" | 1.2
| rowspan="2" | 64
|-
! rowspan="2" style="text-align:left;" | GeForce GTX 650
| September 13, 2012
| GK107-450-A2
| 1300
| 118
| rowspan="2" | 1058
| {{N/a}}
| {{N/a}}
| rowspan="2" | 1058
| rowspan="2" | 1250<br />(5000)
| rowspan="2" | 2
| rowspan="2" | 384:32:16
| rowspan="4" | 1<br />2
| rowspan="2" | 80
| rowspan="2" |16.9
| rowspan="2" |33.8
| 812.54
| rowspan="2" | 33.86
| $110
|-
| November 27, 2013<ref>{{cite web|title=NVIDIA GeForce GTX 650 Specs|url=https://www.techpowerup.com/gpu-specs/geforce-gtx-650.c2445|access-date=2021-12-09|website=TechPowerUp|language=en}}</ref>
| GK-106-400-A1
| rowspan="4" |2540
| rowspan="4" |221
| {{N/a}}
| 65
|
| {{unk}}
|-
! style="text-align:left;" | GeForce GTX 650 Ti
| October 9, 2012
| GK106-220-A1
| 928
| {{N/a}}
| {{N/a}}
| 928
| 1350<br />(5400)
| rowspan="2" | 4
| 768:64:16
| 86.4
| 14.8
| 59.4
| 1425.41
| 59.39
| 110
| $150 (130)
|-
! style="text-align:left;" | GeForce GTX 650 Ti (Boost)
| March 26, 2013
| GK106-240-A1
| rowspan="2" | 980
| rowspan="2" | 1032
| {{N/a}}
| rowspan="2" | 980
| 1502<br />(6008)
| 768:64:24
| 144.2
| 192
| 23.5
| 62.7
| 1505.28
| 62.72
| 134
| $170 (150)
|-
! rowspan="2" style="text-align:left;" | GeForce GTX 660
| September 13, 2012
| GK106-400-A1
| 1084
| 1502<br />(6008)
| 5
| 960:80:24
| 1.5+0.5<br />3
| 96.1+48.1<br />144.2
| 128+64<br />192
| 23.5
| 78.4
| 1881.6
| 78.40
| 140
| $230 (180)
|-
| August 22, 2012
| GK104-200-KD-A2
| rowspan="4" | 3540
| rowspan="4" | 294
| 823.5
| 888.5
| 899
| 823
| 1450<br />(5800)
| 6
| 1152:96:24<br />1152:96:32
| 1.5<br />2<br />3
| 139 <br /> 186
| 192<br />256
| 19.8
| 79
| 2108.6
| 79.06
| 130
| OEM
|-
! style="text-align:left;" | GeForce GTX 660 Ti
| August 16, 2012
| GK104-300-KD-A2
| rowspan="2" | 915
| rowspan="2" | 980
| 1058
| rowspan="2" | 915
| 1502<br />(6008)
| rowspan="2" | 7
| 1344:112:24
| 2
| 96.1+48.1<br />144.2
| 128+64<br />192
| 22.0
| 102.5
| 2459.52
| 102.48
| 150
| $300
|-
! style="text-align:left;" | GeForce GTX 670
| May 10, 2012
| GK104-325-A2
| 1084
| 1502<br />(6008)
| 1344:112:32
| rowspan="2" | 2<br />4
| 192.256
| rowspan="2" | 256
| 29.3
| 102.5
| 2459.52
| 102.48
| 170
| $400
|-
! style="text-align:left;" | GeForce GTX 680
| March 22, 2012
| GK104-400-A2
| 1006<ref name="gtx680-nvidia-paper">{{cite web |url= http://www.geforce.com/Active/en_US/en_US/pdf/GeForce-GTX-680-Whitepaper-FINAL.pdf |title= Nvidia GeForce GTX 680 Whitepaper.pdf |url-status= dead |archive-url= https://web.archive.org/web/20120417045615/http://www.geforce.com/Active/en_US/en_US/pdf/GeForce-GTX-680-Whitepaper-FINAL.pdf |archive-date= April 17, 2012 |df= mdy-all}} {{small|(1405KB)}}, page 6 of 29</ref>
| 1058
| 1110
| 1006
| 1502<br />(6008)
| 8
| 1536:128:32
| 192.256
| 32.2
| 128.8
| 3090.43
| 128.77
| 195
| $500
|-
! style="text-align:left;" | GeForce GTX 690
| April 29, 2012
| 2x GK104-355-A2
| 2x 3540
| 2x 294
| 915
| 1019
| 1058
| 915
| 1502<br />(6008)
| 2x 8
| 2x 1536:128:32
| 2x 2
| 2x 192.256
| 2x 256
| 2x 29.28
| 2x 117.12
| 2x 2810.88
| 2x 117.12
| 300
| $1000
|-
! rowspan="2" | Model
! rowspan="2" | Launch
! rowspan="2" | [[Code name]]
! rowspan="2" | Fab ([[Nanometer|nm]])
! rowspan="2" | Transistors (million)
! rowspan="2" | Die size (mm<sup>2</sup>)
! rowspan="2" | [[Computer bus|Bus]] [[I/O interface|interface]]
! colspan="5" | Clock rate
! rowspan="2" | SM count
! rowspan="2" | Core config {{efn|name=geforce 600 1|[[Unified shader model|Unified shaders]]: [[texture mapping unit]]s: [[render output unit]]s}}
! colspan="4" | Memory configuration
! colspan="2" | [[Fillrate]]
! colspan="2" | Processing power ([[GFLOPS]]) {{efn|name=geforce 600 10|To calculate the processing power see [[Kepler (microarchitecture)#Performance]], or [[Fermi (microarchitecture)#Performance]].}}
! colspan="4" | Supported [[Application programming interface|API]] version
! rowspan="2" | [[Thermal design power|TDP]] (Watts)
! rowspan="2" | Release Price (USD)
|-
! Core ([[Hertz|MHz]])
! Average Boost ([[Hertz|MHz]])
! Max Boost ([[Hertz|MHz]])
! Shader ([[Hertz|MHz]])
! Memory ([[Hertz|MHz]])
! Size ([[Gibibyte|GiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! DRAM type
! Bus width ([[bit]])
! Pixel ([[Pixel|GP]]/s)
! Texture ([[Texel (graphics)|GT]]/s)
! [[Single precision floating-point format|Single precision]]
! [[Double precision floating-point format|Double precision]]
! [[Vulkan (API)|Vulkan]] {{efn|name=geforce 600 11}}
! [[Direct3D]]
! [[OpenGL]]
! [[OpenCL]]
|}
{{notelist}}
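As a worked example of the processing-power column above, the single-precision figure follows from two floating-point operations per CUDA core per clock: for the GeForce GTX 680 this is 1536 × 2 × 1006 MHz ≈ 3,090 GFLOPS, matching the value listed in the table.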
=== GeForce 700 series ===
{{Further|GeForce 700 series|Kepler (microarchitecture)}}
The GeForce 700 series for desktops comprises a mix of architectures: the GM107 chips are [[Maxwell (microarchitecture)|Maxwell]]-based, the GF1xx chips are [[Fermi (microarchitecture)|Fermi]]-based, and the GKxxx chips are [[Kepler (microarchitecture)|Kepler]]-based.
* Improved [[Nvidia NVENC|NVENC]]
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
! rowspan="2" | Model{{spaces}}name
! rowspan="2" | Launch
! rowspan="2" | [[Code name]]
! rowspan="2" | Fab ([[Nanometer|nm]])
! rowspan="2" | Transistors (million)
! rowspan="2" | Die size (mm<sup>2</sup>)
! rowspan="2" | [[Computer bus|Bus]] [[I/O interface|interface]]
! colspan="4" | Clock rate
! rowspan="2" | SMX count
! rowspan="2" | Core config {{efn|name=geforce 700 1|[[Unified shader model|Unified shaders]]: [[texture mapping unit]]s: [[render output unit]]s}}
! colspan="4" | Memory configuration
! colspan="2" | [[Fillrate]]
! colspan="2" | Processing power ([[GFLOPS]]) {{efn|name=geforce 700 9|To calculate the processing power see [[Maxwell (microarchitecture)#Performance]], or [[Kepler (microarchitecture)#Performance]].}}
! colspan="4" | Supported [[Application programming interface|API]] version
! rowspan="2" | [[Thermal design power|TDP]] (Watts)
! rowspan="2" | Release Price (USD)
|-
! Base ([[Hertz|MHz]])
! Average Boost ([[Hertz|MHz]])
! Max Boost {{efn|name=geforce 700 2|Max Boost depends on ASIC quality. For example, some GTX TITAN with over 80% ASIC quality can hit 1019 MHz by default, lower ASIC quality will be 1006 MHz or 993 MHz.}} ([[Hertz|MHz]])
! Memory ([[Hertz|MHz]])
! Size ([[Gibibyte|GiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! DRAM type
! Bus width ([[bit]])
! Pixel ([[Pixel|GP]]/s)
! Texture ([[Texel (graphics)|GT]]/s)
! [[Single precision floating-point format|Single precision]]
! [[Double precision floating-point format|Double precision]]
! [[Vulkan (API)|Vulkan]] {{efn|name=geforce 700 11|Maxwell supports Vulkan 1.3, while Kepler only supports Vulkan 1.2; Fermi does not support the Vulkan API at all.<ref name="vulkandrv" />}}
! [[Direct3D]] {{efn|name=geforce 700 3|Kepler supports some optional 11.1 features on [[Direct3D feature level|feature level]] 11_0 through the Direct3D 11.1 API; however, Nvidia did not enable four non-gaming features needed to qualify Kepler for feature level 11_1.<ref>{{cite web |url=http://www.guru3d.com/news_story/nvidia_kepler_not_fully_compliant_with_directx_11_1.html |title=Nvidia Kepler not fully compliant with Direct3D 11.1 |website=Guru3d.com |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20151222113454/http://www.guru3d.com/news_story/nvidia_kepler_not_fully_compliant_with_directx_11_1.html |archive-date=2015-12-22 |url-status=live}}</ref><ref>{{Cite web|url=https://brightsideofnews.com/blog/nvidia-doesnt-fully-support-directx-111-with-kepler-gpus2c-bute280a6/|archiveurl=https://web.archive.org/web/20130903174514/http://www.brightsideofnews.com/news/2012/11/21/nvidia-doesnt-fully-support-directx-111-with-kepler-gpus2c-bute280a6.aspx|url-status=dead|title=Nvidia Doesn't Fully Support DirectX 11.1 with Kepler GPUs, But... |date=November 21, 2012 |archivedate=September 3, 2013}}</ref>}}
! [[OpenGL]]
! [[OpenCL]]
|-
! style="text-align:left;" | GeForce GT{{spaces}}705<ref>{{cite web |url=http://www.techpowerup.com/gpudb/2578/geforce-gt-705.html |title=Nvidia GeForce GT 705 {{pipe}} techPowerUp GPU Database |website=Techpowerup.com |access-date=2015-12-11 |archive-url=https://archive.today/20140615091732/http://www.techpowerup.com/gpudb/2578/geforce-gt-705.html |archive-date=2014-06-15 |url-status=live}}</ref> {{efn|name=geforce 700 4|The GeForce GT 705 (OEM) is a rebranded GeForce GT 610, which itself is a rebranded GeForce GT 520.}}
| rowspan="2" | March 27, 2014
| GF119-300-A1
| [[TSMC]]<br />40 nm
| 292
| 79
| PCIe 2.0 x16
| 810
| {{n/a}}
| {{n/a}}
| 898<br />(1796)
| rowspan="4" | 1
| 48:8:4
| 0.5<br />1
| rowspan="2" | 14.4
| rowspan="2" | DDR3
| 64
| 3.24
| 6.5
| 155.5
| 19.4
| {{N/a}}
| rowspan="21" | 12
| rowspan="20" | 4.6
| 1.1
| 29
| rowspan="2" | OEM
|-
! rowspan="2" style="text-align:left;" | GeForce GT{{spaces}}710<ref>{{cite web |url=http://www.techpowerup.com/gpudb/1990/geforce-gt-710.html |title=Nvidia GeForce GT 710 {{pipe}} techPowerUp GPU Database |website=Techpowerup.com |access-date=2015-12-11 |archive-url=https://archive.today/20140615091722/http://www.techpowerup.com/gpudb/1990/geforce-gt-710.html |archive-date=2014-06-15 |url-status=live}}</ref>
| GK208-301-A1
| rowspan="5" | TSMC<br />[[28 nm]]
| rowspan="5" | 1020
| rowspan="5" | 79
| PCIe 2.0 x8
| 823
| {{n/a}}
| {{n/a}}
| 900 (1800)
| 192:16:8
| 0.5
| rowspan="5" | 64
| 6.6
| 13.2
| 316.0
| 13.2
| 1.2
| rowspan="5" | 1.2
|
|-
| January 26, 2016
| GK208-203-B1
| PCIe 2.0 x8, PCIe x1
| 954
| {{n/a}}
| {{n/a}}
| 900 (1800)<br />1253 (5010)
| 192:16:8
| 1<br />2
| 14.4<br />40.0
| rowspan="2" | DDR3<br />GDDR5
| 7.6
| 15.3
| 366
| 15.3
|
| rowspan="2" | 19
| $35–45
|-
! style="text-align:left;" | GeForce GT{{spaces}}720<ref>{{cite web |url=http://www.techpowerup.com/gpudb/1989/geforce-gt-720.html |title=Nvidia GeForce GT 720 {{pipe}} techPowerUp GPU Database |website=Techpowerup.com |access-date=2015-12-11 |archive-url=https://archive.today/20140615091724/http://www.techpowerup.com/gpudb/1989/geforce-gt-720.html |archive-date=2014-06-15 |url-status=live}}</ref>
| March 27, 2014
| GK208-201-B1
| rowspan="3" | PCIe 2.0 x8
| 797
| {{n/a}}
| {{n/a}}
| 900 (1800)<br />1253 (5010)
| 192:16:8
| 1<br />2
| 14.4<br />40.0
| 6.4
| 12.8
| 306.0
| 12.8
|
| $49–59
|-
! rowspan="3" style="text-align:left;" | GeForce GT{{spaces}}730 <br /><ref name="gt730">{{cite web |url=http://www.geforce.com/hardware/desktop-gpus/geforce-gt-730/specifications |title=GT 730 {{pipe}} Specifications |website=GeForce.com |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20151212232644/http://www.geforce.com/hardware/desktop-gpus/geforce-gt-730/specifications |archive-date=2015-12-12 |url-status=live}}</ref>{{efn|name=geforce 700 5|The GeForce GT 730 (DDR3, 64-bit) is a rebranded GeForce GT 630 (Rev. 2).}} {{efn|name=geforce 700 6|The GeForce GT 730 (DDR3, 128-bit) is a rebranded GeForce GT 630 (128-bit).}}
| rowspan="3" | June 18, 2014
| GK208-301-A1
| 902
| {{n/a}}
| {{n/a}}
| 900<br />(1800)
| rowspan="5" | 2
| 384:16:8
| 1<ref name="1GBgt730">{{cite web|url=https://www.zotac.com/product/graphics_card/gt-730-1gb|title=GeForce GT 730 1GB-ZOTAC|work=[[ZOTAC]]|access-date=July 12, 2017|archive-url=https://web.archive.org/web/20160719212714/https://www.zotac.com/product/graphics_card/gt-730-1gb|archive-date=July 19, 2016|url-status=live}}</ref><br />2<br />4
| 14.4
| DDR3
| 7.22
| 14.44
| rowspan="2" |692.7
| rowspan="2" |28.9
|
| 23
| rowspan="3" | $69–79
|-
| GK208-400-A1
| 902
| {{n/a}}
| {{n/a}}
| 1250<br />(5000)
| 384:16:8
| 1<br />2<ref>{{cite web |url=http://www.evga.com/products/product.aspx?pn=02G-P3-3733-KR |title=EVGA - Products - EVGA GeForce GT 730 2GB (Low Profile) - 02G-P3-3733-KR |access-date=2017-04-29 |archive-url=https://web.archive.org/web/20170215122047/http://www.evga.com/Products/Product.aspx?pn=02G-P3-3733-KR |archive-date=2017-02-15 |url-status=live}}</ref>
| 40.0
| GDDR5
| 7.22
| 14.44
|
| 25
|-
| GF108
| TSMC<br />40 nm
| 585
| 116
| PCIe 2.0 x16
| 700
| {{n/a}}
| {{n/a}}
| 900<br />(1800)
| 96:16:4
| rowspan="3" | 1<br />2<br />4
| 28.8
| rowspan="2" | DDR3
| 128
| 2.8
| 11.2
| 268.8
| 33.6
| {{N/a}}
| 1.1
| 49
|-
! rowspan="2" style="text-align:left;" | GeForce GT{{spaces}}740 {{efn|name=geforce 700 7|The GeForce GT 740 (OEM) is a rebranded GeForce GTX 650}}
| rowspan="2" | May 29, 2014
| rowspan="2" | GK107-425-A2
| rowspan="14" | [[TSMC]]<br />[[28 nanometer|28HP]]
| rowspan="2" | 1270
| rowspan="2" | 118
| rowspan="14" | PCIe 3.0 x16
| 993
| {{n/a}}
| {{n/a}}
| 891<br />(1782)
| 384:32:16
| 28.5
| rowspan="5" | 128
| 15.9
| 31.8
| rowspan="2" |762.6
| rowspan="2" |31.8
| 1.2
| rowspan="14" | 1.2
| rowspan="2" | 64
| rowspan="2" | $89–99
|-
| 993
| {{n/a}}
| {{n/a}}
| 1252<br />(5008)
| 384:32:16
| 80.1
| GDDR5
| 15.9
| 31.8
|
|-
! style="text-align:left;" | GeForce GTX{{spaces}}745
| rowspan="3" | February 18, 2014
| GM107-220-A2
| rowspan="3" | 1870
| rowspan="3" | 148
| 1033
| {{unk}}
| {{unk}}
| 900<br />(1800)
| 3
| 384:24:16
| 1<br />4
| 28.8
| DDR3
| 16.5
| 24.8
| 793.3
| 24.8
| rowspan="3" | 1.3
| rowspan="2" | 55
| OEM
|-
! style="text-align:left;" | GeForce GTX 750
| GM107-300-A2
| 1020
| 1085
| 1163
| 1250<br />(5000)
| 4
| 512:32:16
| 1<br />2<br />4<ref>{{cite web|title=AFOX GTX 750 4 GB Specs|url=https://www.techpowerup.com/gpu-specs/afox-gtx-750-4-gb.b9074|access-date=2021-08-11|website=TechPowerUp|language=en}}</ref>
| 80
| rowspan="11" | GDDR5
| 16.3
| 32.6
| 1044.5
| 32.6
| $119
|-
! style="text-align:left;" | GeForce GTX 750 Ti
| GM107-400-A2
| 1020
| 1085
| 1200
| 1350<br />(5400)
| 5
| 640:40:16
| 1<br />2<br />4
| 86.4
| 16.3
| 40.8
| 1305.6
| 40.8
| 60
| $149
|-
! style="text-align:left;" | GeForce GTX 760 (192-bit)
| October 17, 2013
| GK104-200-KD-A2
| rowspan="4" | 3540
| rowspan="4" | 294
| 824
| 888
| 889
| 1450<br />(5800)
| rowspan="2" | 6
| 1152:96:24
| 1.5<br />3
| 139.2
| 192
| 19.8
| 79.1
| 1896.2
| 79.0
| rowspan="9" | 1.2
| 130
| OEM
|-
! style="text-align:left;" | GeForce GTX 760
| June 25, 2013
| GK104-225-A2
| 980
| 1033
| 1124
| 1502<br />(6008)
| 1152:96:32
| 2<br />4
| 192.3
| rowspan="3" | 256
| 31.4 {{efn|name=geforce 700 10|As a Kepler GPC is able to rasterize 8 pixels per clock, fully enabled GK110 GPUs (780 Ti/TITAN Black) can only output 40 pixels per clock (5 GPCs), despite 48 ROPs and all SMX units being physically present. For GTX 780 and GTX 760, multiple GPC configurations with differing pixel fillrate are possible, depending on which SMXs were disabled in the chip: 5/4 GPCs, or 4/3 GPCs, respectively.}}
| 94
| 2257.9
| 94.1
| rowspan="2" | 170
| $249 ($219)
|-
! style="text-align:left;" | GeForce GTX 760 Ti {{efn|name=geforce 700 8|The GeForce GTX 760 Ti (OEM) is a rebranded GeForce GTX 670.}}
| September 27, 2013<ref>{{cite web |url=https://www.techpowerup.com/gpudb/2491/geforce-gtx-760-ti-oem |title=NVIDIA GeForce GTX 760 Ti OEM Specs |access-date=2018-08-06 |archive-url=https://archive.today/20160530101755/http://www.techpowerup.com/gpudb/2491/geforce-gtx-760-ti-oem |archive-date=2016-05-30 |url-status=live}}</ref>
| GK104
| 915
| 980
| 1084
| 1502<br />(6008)
| 7
| 1344:112:32
| 2
| 192.3
| 29.3
| 102.5
| 2459.5
| 102.5
| OEM
|-
! style="text-align:left;" | GeForce GTX 770
| May 30, 2013
| GK104-425-A2
| 1046
| 1085
| 1130
| 1752.5<br />(7010)
| 8
| 1536:128:32
| 2<br />4
| 224
| 33.5
| 134
| 3213.3
| 133.9
| rowspan="5" | 230
| $399 ($329)
|-
! style="text-align:left;" | GeForce GTX 780
| May 23, 2013
| GK110-300-A1
| rowspan="4" | 7080
| rowspan="4" | 561
| 863
| 900
| 1002
| 1502<br />(6008)
| 12
| 2304:192:48
| 3<br />6<ref>{{cite web |url=http://www.evga.com/articles/00830/ |title=Articles - EVGA GeForce GTX 780 6GB Step-Up Available Now! |publisher=EVGA |date=2014-03-21 |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20160113203734/http://www.evga.com/articles/00830/ |archive-date=2016-01-13 |url-status=live}}</ref>
| 288.4
| rowspan="4" | 384
| 41.4 {{efn|name=geforce 700 10|As a Kepler GPC is able to rasterize 8 pixels per clock, fully enabled GK110 GPUs (780 Ti/TITAN Black) can only output 40 pixels per clock (5 GPCs), despite 48 ROPs and all SMX units being physically present. For GTX 780 and GTX 760, multiple GPC configurations with differing pixel fillrate are possible, depending on which SMXs were disabled in the chip: 5/4 GPCs, or 4/3 GPCs, respectively.}}
| 160.5
| 3976.7
| 165.7
| $649 ($499)
|-
! style="text-align:left;" | GeForce GTX 780 Ti<ref>{{cite web |url=http://www.geforce.com/hardware/desktop-gpus/geforce-gtx-780-ti/specifications |title=GeForce GTX780 Ti. Specifications |website=Geforce.com |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20151212021141/http://www.geforce.com/hardware/desktop-gpus/geforce-gtx-780-ti/specifications |archive-date=2015-12-12 |url-status=live}}</ref><ref>{{cite web |url=http://videocardz.com/47508/videocardz-nvidia-geforce-gtx-780-ti-2880-cuda-cores |title=Nvidia GeForce GTX 780 Ti has 2880 CUDA cores |website=Videocardz.com |date=31 October 2013 |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20151222120538/http://videocardz.com/47508/videocardz-nvidia-geforce-gtx-780-ti-2880-cuda-cores |archive-date=2015-12-22 |url-status=live}}</ref><ref>{{cite web |url=http://web-engage.augure.com/pub/link/282593/04601926874847631383752919307-hl-com.com.html |title=PNY dévoile son nouveau foudre de guerre: la GeForce GTX 780 TI. |website=Web-engage.augure.com |access-date=2015-12-11 |url-status=dead |archive-url=https://web.archive.org/web/20131109211440/http://web-engage.augure.com/pub/link/282593/04601926874847631383752919307-hl-com.com.html |archive-date=November 9, 2013}}</ref>
| November 7, 2013
| GK110-425-B1
| 876
| 928
| 1019
| 1752.5<br />(7010)
| 15
| 2880:240:48
| 3
| 336.5
| 42.0 {{efn|name=geforce 700 10|As a Kepler GPC is able to rasterize 8 pixels per clock, fully enabled GK110 GPUs (780 Ti/TITAN Black) can only output 40 pixels per clock (5 GPCs), despite 48 ROPs and all SMX units being physically present. For GTX 780 and GTX 760, multiple GPC configurations with differing pixel fillrate are possible, depending on which SMXs were disabled in the chip: 5/4 GPCs, or 4/3 GPCs, respectively.}}
| 210.2
| 5045.7
| 210.2
| $699
|-
! style="text-align:left;" | GeForce GTX TITAN<ref>{{cite web |url=http://www.geforce.com/hardware/desktop-gpus/geforce-gtx-titan |title=GeForce GTX TITAN |website=Geforce.com |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20151205173714/http://www.geforce.com/hardware/desktop-gpus/geforce-gtx-titan |archive-date=2015-12-05 |url-status=live}}</ref><ref>{{cite web |url=http://www.nvidia.com/titan-graphics-card |title=TITAN Graphics Card |website=Nvidia.com |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20130224082627/http://www.nvidia.com/titan-graphics-card |archive-date=2013-02-24 |url-status=live}}</ref><ref>{{cite web |url=http://www.anandtech.com/show/6760/nvidias-geforce-gtx-titan-part-1 |title=Nvidia's GeForce GTX Titan, Part 1: Titan For Gaming, Titan For Compute |website=Anandtech.com |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20151204225432/http://www.anandtech.com/show/6760/nvidias-geforce-gtx-titan-part-1 |archive-date=2015-12-04 |url-status=dead}}</ref>
| February 21, 2013
| GK110-400-A1
| 837
| 876
| 993
| 1502<br />(6008)
| 14
| 2688:224:48
| rowspan="2" | 6
| 288.4
| 40.2
| 187.5
| 4499.7
| 1300<ref>{{cite web |title=Titan's Compute Performance (aka Ph.D Lust) - Nvidia's GeForce GTX Titan Review, Part 2: Titan's Performance Unveiled |url=http://www.anandtech.com/show/6774/nvidias-geforce-gtx-titan-part-2-titans-performance-unveiled/3 |url-status=dead |archive-url=https://web.archive.org/web/20151222141607/http://www.anandtech.com/show/6774/nvidias-geforce-gtx-titan-part-2-titans-performance-unveiled/3 |archive-date=2015-12-22 |access-date=2015-12-11 |website=Anandtech.com |quote=the calculated fp64 peak of Titan is 1.5 TFlops. However, under heavy load in fp64 mode, the card may underclock below the listed 837MHz to remain within the power and thermal specifications}}</ref>–1499.9
| rowspan="2" | $999
|-
! style="text-align:left;" | GeForce GTX TITAN Black
| February 18, 2014
| GK110-430-B1
| 889
| 980
| 1058
| 1752.5<br />(7010)
| 15
| 2880:240:48
| 336.5
| 42.7
| 213.4
| 5120.6
| 1706.9
|-
! style="text-align:left;" | GeForce GTX TITAN Z
| May 28, 2014
| 2x GK110-350-B1<ref>{{cite web|url=https://diy.pconline.com.cn/488/4883359_all.html|title=售价21999元!NV旗舰GTX TITAN Z评测-太平洋电脑网|date=5 June 2014|access-date=2020-08-16|archive-date=14 January 2023|archive-url=https://web.archive.org/web/20230114032336/https://diy.pconline.com.cn/488/4883359_all.html|url-status=live}}</ref>
| 2x<br />7080
| 2x<br />561
| 705
| 876
| {{unk}}
| 1752.5<br />(7010)
| 2x 15
| 2x<br />2880:240:48
| 2x 6
| 2x 336.5
| 2x 384
| 2x 33.8
| 2x 169
| 2x 5046
| 2x 1682<ref>{{cite web |title=NVIDIA GeForce GTX TITAN Z Specs |url=https://www.techpowerup.com/gpu-specs/geforce-gtx-titan-z.c2575 |access-date=2020-02-26 |website=TechPowerUp |language=en}}</ref>
| 4.5
| 375
| $2999
|-
! rowspan="2" | Model
! rowspan="2" | Launch
! rowspan="2" | [[Code name]]
! rowspan="2" | Fab ([[Nanometer|nm]])
! rowspan="2" | Transistors (million)
! rowspan="2" | Die size (mm<sup>2</sup>)
! rowspan="2" | [[Computer bus|Bus]] [[I/O interface|interface]]
! colspan="4" | Clock rate
! rowspan="2" | SMX count
! rowspan="2" | Core config {{efn|name=geforce 700 1|[[Unified shader model|Unified shaders]]: [[texture mapping unit]]s: [[render output unit]]s}}
! colspan="4" | Memory configuration
! colspan="2" | [[Fillrate]]
! colspan="2" | Processing power ([[GFLOPS]]) {{efn|name=geforce 700 9}}
! colspan="4" | Supported [[Application programming interface|API]] version
! rowspan="2" | [[Thermal design power|TDP]] (Watts)
! rowspan="2" | Release Price (USD)
|-
! Base ([[Hertz|MHz]])
! Average Boost ([[Hertz|MHz]])
! Max Boost {{efn|name=geforce 700 2|Max Boost depends on ASIC quality. For example, some GTX TITAN with over 80% ASIC quality can hit 1019 MHz by default, lower ASIC quality will be 1006 MHz or 993 MHz.}} ([[Hertz|MHz]])
! Memory ([[Hertz|MHz]])
! Size ([[Gibibyte|GiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! DRAM type
! Bus width ([[bit]])
! Pixel ([[Pixel|GP]]/s)
! Texture ([[Texel (graphics)|GT]]/s)
! [[Single precision floating-point format|Single precision]]
! [[Double precision floating-point format|Double precision]]
! [[Vulkan (API)|Vulkan]] {{efn|name=geforce 700 11}}
! [[Direct3D]] {{efn|name=geforce 700 3}}
! [[OpenGL]]
! [[OpenCL]]
|}
{{notelist}}
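As a worked example of the figures above, the GTX 780 Ti's single-precision value is 2880 CUDA cores × 2 × 876 MHz ≈ 5,046 GFLOPS, and its double-precision value is 1/24 of that (about 210 GFLOPS). The GTX TITAN's GK110 can instead run double precision at 1/3 of its single-precision rate (4,499.7 / 3 ≈ 1,500 GFLOPS); its FP64 figure is listed as a range because, as noted in the cited review, the card may reduce its clock under heavy FP64 load.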
===GeForce 900 series===
{{Further|GeForce 900 series|Maxwell (microarchitecture)}}
* All models support the following [[Application programming interface|API]]s: [[Direct3D]] 12 (feature level 12_1), [[OpenGL]] 4.6, [[OpenCL]] 3.0, [[Vulkan (API)|Vulkan]] 1.3<ref name="vulkandrv">{{cite web | url=https://www.khronos.org/conformance/adopters/conformant-products | title=The Khronos Group | date=31 May 2022 | access-date=11 June 2017 | archive-date=28 January 2017 | archive-url=https://web.archive.org/web/20170128195542/https://www.khronos.org/conformance/adopters/conformant-products | url-status=live}}</ref> and [[CUDA]] 5.2
*Improved [[Nvidia NVENC|NVENC]] (YUV 4:4:4, predictive lossless encoding).
*Added [[High Efficiency Video Coding|H.265]] hardware support on GM20x.
*GM108 does not have [[NVENC]] hardware encoder support.
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
! rowspan="2" | Model
! rowspan="2" | Launch
! rowspan="2" | [[Code name]]
! rowspan="2" | Process
! rowspan="2" | Transistors (billion)
! rowspan="2" | Die size (mm<sup>2</sup>)
! rowspan="2" | [[Computer bus|Bus]] [[I/O interface|interface]]
! colspan="3" | Clock Speeds
! rowspan="2" | Core config{{efn|name=CoreConfig}}
! rowspan="2" |[[GPU cache|L2 Cache]]<br />([[Mebibyte|MiB]])
! colspan="4" | Memory
! colspan="2" | [[Fillrate]]{{efn|name=PerfValues}}
! colspan="2" | Processing power ([[GFLOPS]]){{efn|name=PerfValues}}{{efn|name=ProcessingPower}}
! rowspan="2" | [[Thermal design power|TDP]] (Watts)
! rowspan="2" | [[Scalable Link Interface#|SLI]] support
! Release price (USD)
|-
! Base ([[Hertz|MHz]])
! Memory ([[Transfers per second|GT/s]])
! Size ([[Gibibyte|GiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width ([[bit]])
! Pixel ([[Pixel|GP]]/s){{efn|name=PixelFillrate}}
! Texture ([[Texel (graphics)|GT]]/s){{efn|name=TextureFillrate}}
! [[Single precision floating-point format|Single precision]]
! [[Double precision floating-point format|Double precision]]
! MSRP
|-
! style="text-align:left;" | GeForce GT 945A<ref>{{cite web |url=https://support.hp.com/at-de/document/c04996577 |title=Sprout Pro by HP |publisher=[[Hewlett-Packard|HP]] |access-date=2019-01-09 |archive-url=https://web.archive.org/web/20190109111211/https://support.hp.com/at-de/document/c04996577 |archive-date=2019-01-09 |url-status=live}}</ref><ref>{{cite web |url=https://devtalk.nvidia.com/default/topic/915766/linux-solaris-and-freebsd-driver-361-28-long-lived-branch-release-/ |title=Linux, Solaris, and FreeBSD driver 361.28 (long-lived branch release) |publisher=Nvidia |date=2016-02-09 |access-date=2016-02-10 |archive-url=https://web.archive.org/web/20160216022209/https://devtalk.nvidia.com/default/topic/915766/linux-solaris-and-freebsd-driver-361-28-long-lived-branch-release-/ |archive-date=2016-02-16 |url-status=live}}</ref><ref>{{cite web |url=https://www.techpowerup.com/gpudb/2813/geforce-945a |title=NVIDIA GeForce 945A Specs |access-date=2018-08-06}}{{dead link|date=June 2022|bot=medic}}{{cbignore|bot=medic}}</ref>
| February 2016
| GM108
| rowspan="9" | [[TSMC]]<br />[[32 nm process|28HP]]
| {{unk}}
| {{unk}}
| PCIe 3.0 x8
| 1072
| 1176
| 1.8
| 512:24:8 (4)
| {{unk}}
| 1 / 2
| 14.4
| [[DDR3]] / [[GDDR5]]
| 64
| 8.5<br />9.4
| 25.7<br />28.2
| 1,097.7<br />1,204.2
| 34.3<br />37.6
| 33
| {{No}}
| {{okay|OEM}}
|-
! style="text-align:left;" | GeForce GTX 950<ref>{{cite web |url=http://www.geforce.com/hardware/desktop-gpus/geforce-gtx-950/specifications |title=GTX 950 {{pipe}} Specifications |publisher=GeForce |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20151212232816/http://www.geforce.com/hardware/desktop-gpus/geforce-gtx-950/specifications |archive-date=2015-12-12 |url-status=live}}</ref>
| August 20, 2015
| GM206-250
| rowspan="3" | 2.94
| rowspan="3" | 227
| rowspan="8" | PCIe 3.0 x16
| 1024
| 1188
| 6.6
| 768:48:32 (6)
| rowspan="4" | 1
| rowspan="2" | 2
| 105.7
| rowspan="8" | [[GDDR5]]
| rowspan="3" | 128
| 32.7<br />38.0
| 49.1<br />57.0
| 1,572.8<br />1,824.7
| 49.1<br />57.0
| 90 (75{{efn|name=GTX950NPC|Some GTX 950 cards were released without a power connector and are powered only by the PCIe slot. These had their power consumption and TDP limited to 75 W.<ref>{{Cite web|url=https://www.anandtech.com/show/10250/gigabyte-adds-geforce-gtx-950-with-75w-power-consumption-to-lineup|title=GIGABYTE Adds 75W GeForce GTX 950 to Lineup|first=Anton|last=Shilov|website=www.anandtech.com|access-date=11 March 2020|archive-date=28 July 2020|archive-url=https://web.archive.org/web/20200728210134/https://www.anandtech.com/show/10250/gigabyte-adds-geforce-gtx-950-with-75w-power-consumption-to-lineup |url-status=dead}}</ref>}})
| rowspan="4" | 2-way [[Scalable Link Interface#|SLI]]
| $159
|-
! style="text-align:left;" | GeForce GTX 950 (OEM)<ref>{{cite web|url=https://www.geforce.com/hardware/desktop-gpus/geforce-gtx-950-oem/specifications|title=GeForce GTX 950 (OEM) {{pipe}} Specifications {{pipe}} GeForce|website=geforce.com|access-date=2019-01-09|archive-url=https://web.archive.org/web/20180923200607/https://www.geforce.com/hardware/desktop-gpus/geforce-gtx-950-oem/specifications|archive-date=2018-09-23|url-status=live}}</ref>
| {{unk}}
| GM206
| 935
| {{unk}}
| 5
| rowspan="2" | 1024:64:32 (8)
| 80.0
| 29.9
| 59.8
| 1,914.9
| 59.8
| {{unk}}
| {{okay|OEM}}
|-
! style="text-align:left;" | GeForce GTX 960<ref>{{cite web |url=http://www.geforce.com/hardware/desktop-gpus/geforce-gtx-960/specifications |title=GTX 960 {{pipe}} Specifications |publisher=GeForce |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20151212024030/http://www.geforce.com/hardware/desktop-gpus/geforce-gtx-960/specifications |archive-date=2015-12-12 |url-status=live}}</ref>
| January 22, 2015
| GM206-300
| 1127
| 1178
| 7
| 2<br />4
| 112.1
| 36.0<br />37.6
| 72.1<br />75.3
| 2,308.0<br />2,412.5
| 72.1<br />75.3
| 120
| $199
|-
! style="text-align:left;" | GeForce GTX 960 (OEM)<ref>{{cite web |url=https://www.geforce.com/hardware/desktop-gpus/geforce-gtx-960-oem/specifications |title=GeForce GTX 960 (OEM) {{pipe}} Specifications {{pipe}} GeForce |website=geforce.com |access-date=2019-01-09 |archive-url=https://web.archive.org/web/20180923200618/https://www.geforce.com/hardware/desktop-gpus/geforce-gtx-960-oem/specifications |archive-date=2018-09-23 |url-status=live}}</ref>
| {{unk}}
| GM204
| rowspan="3" | 5.2
| rowspan="3" | 398
| 924
| {{unk}}
| 5
| 1280:80:48 (10)
| 3
| 120.0
| 192
| 44.3
| 73.9
| 2,365.4
| 73.9
| {{unk}}
| {{okay|OEM}}
|-
! style="text-align:left;" | GeForce GTX 970<ref>{{cite web |url=http://www.geforce.com/hardware/desktop-gpus/geforce-gtx-970/specifications |title=GTX 970 {{pipe}} Specifications |publisher=GeForce |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20151207185709/http://www.geforce.com/hardware/desktop-gpus/geforce-gtx-970/specifications |archive-date=2015-12-07 |url-status=live}}</ref>
| September 18, 2014
| GM204-200
| 1050
| 1178
| rowspan="4" | 7
| 1664:104:56 (13)
| 1.75
| 3.5 +<br />0.5{{efn|name=GTX960MemoryMess|For accessing its memory, the GTX 970 stripes data across 7 of its 8 32-bit physical memory lanes, at 196 GB/s. The last eighth of its memory (0.5 GiB on a 4 GiB card) is accessed over a solitary, non-interleaved 32-bit connection at 28 GB/s, one seventh the speed of the rest of the memory space. Because this smaller memory pool uses the same connection as the seventh lane to the larger main pool, it contends with accesses to the larger block, reducing the effective memory bandwidth rather than adding to it as an independent connection could.<ref>{{cite news |last1=Wasson |first1=Scott |date=January 26, 2015 |title=Nvidia: the GeForce GTX 970 works exactly as intended, A look inside the card's unusual memory config |url=http://techreport.com/review/27724/nvidia-the-geforce-gtx-970-works-exactly-as-intended |newspaper=[[The Tech Report]] |page=1 |access-date=2015-01-26 |archive-url=https://web.archive.org/web/20150128051624/http://techreport.com/review/27724/nvidia-the-geforce-gtx-970-works-exactly-as-intended |archive-date=January 28, 2015 |url-status=live}}</ref>}}
| 196.3 +<br />28.0{{efn|name=GTX960MemoryMess}}
| 224 +<br />32{{efn|name=GTX960MemoryMess}}
| 58.8<br />65.9
| 109.2<br />122.5
| 3,494.4<br />3,920.3
| 109.2<br />122.5
| 145
| rowspan="4" | 4-way [[Scalable Link Interface#|SLI]]
| $329
|-
! style="text-align:left;" | GeForce GTX 980<ref>{{cite web |url=http://www.geforce.com/hardware/desktop-gpus/geforce-gtx-980/specifications |title=GTX 980 {{pipe}} Specifications |publisher=GeForce |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20151208184430/http://www.geforce.com/hardware/desktop-gpus/geforce-gtx-980/specifications |archive-date=2015-12-08 |url-status=live}}</ref>
| September 18, 2014
| GM204-400
| 1126
| 1216
| 2048:128:64 (16)
| 2
| 4
| 224.3
| 256
| 72.0<br />77.8
| 144.1<br />155.6
| 4,612.0<br />4,980.7
| 144.1<br />155.6
|
|
|-
! style="text-align:left;" | GeForce GTX 980 Ti<ref>{{cite web |url=http://www.geforce.com/hardware/desktop-gpus/geforce-gtx-980-ti/specifications |title=GTX 980 Ti {{pipe}} Specifications |publisher=GeForce |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20151211174512/http://www.geforce.com/hardware/desktop-gpus/geforce-gtx-980-ti/specifications |archive-date=2015-12-11 |url-status=live}}</ref>
| June 1, 2015
| GM200-310
| rowspan="2" | 8
| rowspan="2" | 601
| rowspan="2" | 1000
| rowspan="2" | 1075
| 2816:176:96 (22)
| rowspan="2" | 3
| 6
| rowspan="2" | 336.5
| rowspan="2" | 384
| rowspan="2" | 96.0<br />103.2
| 176.0<br />189.2
| 5,632.0<br />6,054.4
| 176.0<br />189.2
| rowspan="2" | 250
| $649
|-
! style="text-align:left;" | GeForce GTX TITAN X<ref>{{cite web |url=http://www.geforce.com/hardware/desktop-gpus/geforce-gtx-titan-x/specifications |title=GTX TITAN X {{pipe}} Specifications |publisher=GeForce |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20151205173930/http://www.geforce.com/hardware/desktop-gpus/geforce-gtx-titan-x/specifications |archive-date=2015-12-05 |url-status=live}}</ref>
| March 17, 2015
| GM200-400
| 3072:192:96 (24)
| 12
| 192.0<br />206.4
| 6,144.0<br />6,604.8
| 192.0<br />206.4
| $999
|}
{{notelist|refs=
{{efn|name=CoreConfig|Main [[Unified shader model|shader processors]] : [[texture mapping unit]]s : [[render output unit]]s (streaming multiprocessors)}}
{{efn|name=PixelFillrate|Pixel fillrate is calculated as the number of ROPs multiplied by the respective core clock speed.}}
{{efn|name=TextureFillrate|Texture fillrate is calculated as the number of TMUs multiplied by the respective core clock speed.}}
{{efn|name=ProcessingPower|To calculate the processing power see [[Maxwell (microarchitecture)#Performance]].}}
{{efn|name=PerfValues|Base clock, Boost clock}}
}}
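The theoretical figures in the table above follow directly from the footnote formulas. The following sketch is illustrative only (it is not Nvidia documentation); it reproduces the GeForce GTX 980 entries from its table values of 2048 shaders, 128 TMUs, 64 ROPs and 1126/1216 MHz base/boost clocks.
<syntaxhighlight lang="python">
# Illustrative sketch: theoretical peak figures for the GTX 980,
# derived with the formulas given in the table notes above.

def fp32_gflops(shaders, clock_mhz):
    # Each CUDA core can issue one fused multiply-add (2 FLOPs) per clock.
    return shaders * 2 * clock_mhz / 1000

def pixel_fillrate_gpixels(rops, clock_mhz):
    # Pixel fillrate = ROPs multiplied by the core clock.
    return rops * clock_mhz / 1000

def texture_fillrate_gtexels(tmus, clock_mhz):
    # Texture fillrate = TMUs multiplied by the core clock.
    return tmus * clock_mhz / 1000

for clock in (1126, 1216):  # base clock, boost clock
    print(fp32_gflops(2048, clock),              # ~4612 / 4981 GFLOPS
          pixel_fillrate_gpixels(64, clock),     # ~72 / 78 GP/s
          texture_fillrate_gtexels(128, clock))  # ~144 / 156 GT/s
</syntaxhighlight>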
===GeForce 10 series===
{{Further|GeForce 10 series|Pascal (microarchitecture)}}
* Supported display standards: [[DisplayPort 1.4|DP 1.4]] (no [[Display Stream Compression|DSC]]), [[HDMI 2.0b]], [[Digital Visual Interface|Dual-link DVI]]{{efn|The NVIDIA TITAN Xp and the Founders Edition GTX 1080 Ti do not have a dual-link DVI port, but a DisplayPort to single-link DVI adapter is included in the box.}}<ref>{{cite web|url=http://www.geforce.com/hardware/10series/geforce-gtx-1080|title=GTX 1080 Graphics Card|author1=[[Nvidia]]|access-date=May 7, 2016|archive-url=https://web.archive.org/web/20160507083310/http://www.geforce.com/hardware/10series/geforce-gtx-1080|archive-date=May 7, 2016|url-status=live}}</ref>
* Supported [[Application programming interface|APIs]]: [[Direct3D]] 12 (12_1), [[OpenGL]] 4.6, [[OpenCL]] 3.0, [[Vulkan (API)|Vulkan]] 1.3<ref name="vulkandrv" /> and [[CUDA]] 6.1
* Improved [[Nvidia NVENC|NVENC]] ([[HEVC]] Main10, decode [[8K resolution|8K30]], etc.)
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
! rowspan="2" | Model{{spaces}}name
! rowspan="2" | Launch
! rowspan="2" | [[Code name]]
! rowspan="2" | Process
! rowspan="2" | Transistors (billion)
! rowspan="2" | Die size (mm<sup>2</sup>)
! rowspan="2" | [[Computer bus|Bus]] [[I/O interface|interface]]
! colspan="3" | Clock speeds
! rowspan="2" | Core config {{efn|name=CoreConfig}}
! rowspan="2" | [[GPU cache|L2 Cache]]<br />([[Mebibyte|MiB]])
! colspan="4" | Memory
! colspan="2" | [[Fillrate]] {{efn|name=PerfValues}}
! colspan="3" | Processing power ([[GFLOPS]]) {{efn|name=PerfValues}}{{efn|name=ProcessingPower}}
! rowspan="2" | [[Thermal design power|TDP]] (Watts)
! rowspan="2" | [[Scalable Link Interface|SLI]] support
! colspan="2" | Release price (USD)
|-
! Base core ([[Hertz|MHz]])
! Boost core ([[Hertz|MHz]])
! Memory ([[Transfers per second|GT/s]])
! Size ([[Gibibyte|GiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width ([[bit]])
! Pixel ([[Pixel|GP]]/s) {{efn|name=PixelFillrate}}{{efn|As the GTX 1070 has one of the four GP104 GPCs disabled in the die, its frontend is only able to rasterize 48 pixels per clock.<ref name="smith1">{{cite web|url=http://www.anandtech.com/show/10325/the-nvidia-geforce-gtx-1080-and-1070-founders-edition-review/29|title=The Nvidia GeForce GTX 1080 & GTX 1070 Founders Editions Review: Kicking Off the FinFET Generation|last=Smith|first=Ryan|access-date=2016-07-21|archive-url=https://web.archive.org/web/20160723082331/http://www.anandtech.com/show/10325/the-nvidia-geforce-gtx-1080-and-1070-founders-edition-review/29|archive-date=2016-07-23|url-status=dead}}</ref> Analogically, the GTX 1060 features only two GPCs on its GP106 die, meaning that its frontend can only rasterize 32 pixels per clock. The remaining backend ROPs can still be used for tasks such as MSAA.<ref name="anandtech11">{{cite web |url=http://www.anandtech.com/show/10540/the-geforce-gtx-1060-founders-edition-asus-strix-review |title=The GeForce GTX 1060 Founders Edition & ASUS Strix GTX 1060 Review |access-date=2017-02-17 |archive-url=https://web.archive.org/web/20170218064526/http://www.anandtech.com/show/10540/the-geforce-gtx-1060-founders-edition-asus-strix-review |archive-date=2017-02-18 |url-status=dead}}</ref>}}
! Texture ([[Texel (graphics)|GT]]/s) {{efn|name=TextureFillrate}}
! [[Half precision floating-point format|Half precision]]
! [[Single precision floating-point format|Single precision]]
! [[Double precision floating-point format|Double precision]]
! MSRP
! Founders Edition
|-
! rowspan="2" style="text-align:left;" | GeForce GT{{spaces}}1010<ref>{{cite web|title=NVIDIA GeForce GT 1010 Specs|url=https://www.techpowerup.com/gpu-specs/geforce-gt-1010.c3762|access-date=2021-02-14|website=TechPowerUp|language=en}}</ref><ref>{{cite web|title=NVIDIA GeForce GT 1010 DDR4 Specs|url=https://www.techpowerup.com/gpu-specs/geforce-gt-1010-ddr4.c3874|access-date=2021-02-14|website=TechPowerUp|language=en}}</ref>
| rowspan="2" | January 13, 2021
| rowspan="2" | GP108-200-A1
| rowspan="7" |[[Samsung Electronics|Samsung]]<br />[[14 nm process|14LPP]]
| rowspan="4" | 1.8
| rowspan="4" | 74
| rowspan="4" | PCIe 3.0 x4
| 1228
| 1468
| 5
| rowspan="2" | 256:16:16<br />(2) (1)
| rowspan="2" | 0.25
| rowspan="5" | 2
| 40.1
| [[GDDR5]]
| rowspan="4" | 64
| 9.8<br/>11.8
| 19.7<br/>23.5
| {{unk}}
| 628.7<br/>751.6
| 26.2<br/>31.3
| 30
| rowspan="14" {{No}}
| rowspan="2" {{okay|OEM}}
| rowspan="12" {{N/a}}
|-
| rowspan="2" | 1152
| 1380
| rowspan="2" | 2.1
| rowspan="2" | 16.8
| rowspan="2" | [[DDR4]]
| 9.2<br/>11.0
| 18.4<br/>22.0
| {{unk}}
| 590.0<br />706.6
| 24.6<br/>29.4
| rowspan="2" | 20
|-
! rowspan="2" style="text-align:left;" | GeForce GT{{spaces}}1030<ref name="gt1030">{{cite web |url=http://www.geforce.com/hardware/desktop-gpus/geforce-gt-1030/specifications |title=GeForce GT 1030 {{pipe}} Specifications {{pipe}} GeForce |access-date=2017-05-17 |archive-url=https://web.archive.org/web/20170520204241/http://www.geforce.com/hardware/desktop-gpus/geforce-gt-1030/specifications |archive-date=2017-05-20 |url-status=live}}</ref><ref name="gt1030ddr4">{{cite web |url=https://www.techspot.com/review/1658-geforce-gt-1030-abomination/ |title=GeForce GT 1030: The DDR4 Abomination Benchmarked |date=9 July 2018 |access-date=2018-08-22 |archive-url=https://web.archive.org/web/20180822213906/https://www.techspot.com/review/1658-geforce-gt-1030-abomination/ |archive-date=2018-08-22 |url-status=live}}</ref><ref>{{cite web |url=https://www.msi.com/Graphics-card/GeForce-GT-1030-AERO-ITX-2G-OC.html#hero-specification |title=Overview GeForce GT 1030 AERO ITX 2G OC |access-date=2017-05-26 |archive-url=https://web.archive.org/web/20170630025507/https://www.msi.com/Graphics-card/GeForce-GT-1030-AERO-ITX-2G-OC.html#hero-specification |archive-date=2017-06-30 |url-status=live}}</ref><ref>{{cite web |url=http://www.palit.com/palit/vgapro.php?id=2883&lang=en&pn=NE5103000646-1080F&tab=sp |title=::Palit Products - GeForce GT 1030 :: |access-date=2017-05-26 |archive-url=https://web.archive.org/web/20170614113347/http://www.palit.com/palit/vgapro.php?id=2883&lang=en&pn=NE5103000646-1080F&tab=sp |archive-date=2017-06-14 |url-status=live}}</ref>
| March 12, 2018
| GP108-310-A1
| 1379
| rowspan="2" | 384:24:16<br />(3) (1)
| rowspan="2" | 0.5
| 18.4<br />22.0
| 27.6<br />33.0
| 13.8<br />16.5
| 884.7<br />1,059.0
| 27.6<br />33.0
| $79
|-
| May 17, 2017
| GP108-300-A1
| 1227
| 1468
| 6
| 48.0
| rowspan="8" | [[GDDR5]]
| 19.6<br />23.4
| 29.4<br />35.2
| 14.7<br />17.6
| 942.3<br />1,127.4
| 29.4<br />35.2
| 30
| $69
|-
! rowspan="2" style="text-align:left;" | GeForce GTX 1050<ref name="Geforce GTX 1050 Family">{{cite web|url=https://www.nvidia.com/en-us/geforce/products/10series/geforce-gtx-1050/|title=GeForce GTX 1050 Graphics Card|website=nvidia.com|language=en-us|access-date=2018-12-27|archive-url=https://web.archive.org/web/20161222152032/https://www.nvidia.com/en-us/geforce/products/10series/geforce-gtx-1050/|archive-date=2016-12-22|url-status=live}}</ref>
| October 25, 2016
| GP107-300-A1
| rowspan="3" | 3.3
| rowspan="3" | 132
| rowspan="17" | PCIe 3.0 x16
| 1354
| 1455
| rowspan="3" | 7
| 640:40:32<br />(5) (2)
| 1
| 112.0
| 128
| 43.3<br />46.6
| 54.1<br />58.8
| 27.0<br />29.1
| 1,733.1<br />1,862.4
| 54.1<br />58.2
| rowspan="3" | 75
| rowspan="2" | $109
|-
| May 21, 2018
| GP107-301-A1
| 1392
| 1518
| 768:48:24<br />(6) (2)
| 0.75
| 3
| 84.0
| 96
| 33.4<br />36.4
| 66.8<br />72.9
| 33.4<br />36.4
| 2,138.1<br />2,331.6
| 66.8<br />72.9
|-
! style="text-align:left;" | GeForce GTX 1050 Ti<ref name="Geforce GTX 1050 Family" />
| October 25, 2016
| GP107-400-A1
| 1290
| 1392
| 768:48:32<br />(6) (2)
| 1
| 4
| 112.0
| 128
| 41.2<br />44.5
| 61.9<br />66.8
| 30.9<br />33.4
| 1,981.4<br />2,138.1
| 61.9<br />66.8
| $139
|-
! rowspan="7" style="text-align:left;" | GeForce GTX 1060 <br/><ref name="Geforce GTX 1060">{{cite web |url=http://www.geforce.com/hardware/10series/geforce-gtx-1060|title=GTX 1060 Graphics Card|website=nvidia.com|access-date=August 18, 2016|archive-url=https://web.archive.org/web/20160816081651/http://www.geforce.com/hardware/10series/geforce-gtx-1060|archive-date=August 16, 2016|url-status=live}}</ref><ref>{{cite web |url=https://wccftech.com/nvidia-5-gb-geforce-gtx-1060-chinese-cafes/ |title=NVIDIA Preps Cut Down, 5 GB GTX 1060 Graphics Card For Cafes |website=wccftech.com |date=26 December 2017 |access-date=2018-12-29 |archive-url=https://web.archive.org/web/20181220125200/https://wccftech.com/nvidia-5-gb-geforce-gtx-1060-chinese-cafes/ |archive-date=2018-12-20 |url-status=live}}</ref><ref>{{cite web |url=https://videocardz.com/68807/nvidia-launches-geforce-gtx-1080-11-gbps-and-gtx-1060-9-gbps |title=NVIDIA launches GeForce GTX 1080 11 Gbps and GTX 1060 9 Gbps |website=videocardz.com |date=20 April 2017 |access-date=2018-12-29 |archive-url=https://web.archive.org/web/20180902115910/https://videocardz.com/68807/nvidia-launches-geforce-gtx-1080-11-gbps-and-gtx-1060-9-gbps |archive-date=2018-09-02 |url-status=live}}</ref><ref>{{cite web |url=https://hexus.net/tech/news/graphics/123461-gigabyte-may-readying-geforce-gtx-1060-gddr5x/ |title=Gigabyte may be readying a GeForce GTX 1060 with GDDR5X |website=hexus.net |date=19 October 2018 |access-date=2018-12-29 |archive-url=https://web.archive.org/web/20181230080857/https://hexus.net/tech/news/graphics/123461-gigabyte-may-readying-geforce-gtx-1060-gddr5x/ |archive-date=2018-12-30 |url-status=live}}</ref>
| December 25, 2016
| GP104-140-A1
| rowspan="14" | [[TSMC]]<br />[[14 nm process|16FF]]
| 7.2
| 314
| rowspan="8" | 1506
| rowspan="7" | 1708
| rowspan="6" | 8
| rowspan="2" | 1152:72:48<br />(9) (2)
| rowspan="2" | 1.5
| rowspan="2" | 3
| rowspan="2" | 192.0
| rowspan="2" | 192
| rowspan="2" | 72.2<br />81.9
| rowspan="2" | 108.4<br />122.9
| rowspan="2" | 54.2<br />61.4
| rowspan="2" | 3,469.8<br />3,935.2
| rowspan="2" | 108.4<br />122.9
| rowspan="7" | 120
| rowspan="2" | $199
|-
| August 18, 2016
| GP106-300-A1
| rowspan="2" | 4.4
| rowspan="2" | 200
|-
| December 26, 2017
| GP106-350-K3-A1
| rowspan="5" | 1280:80:48<br />(10) (2)
| 1.25
| 5
| 160.0
| 160
| 60.2<br />68.3
| rowspan="5" | 120.4<br />136.7
| rowspan="5" | 60.2<br />68.3
| rowspan="5" | 3,855.3<br />4,375.0
| rowspan="5" | 120.4<br />136.7
| {{okay|OEM}}
|-
| March 8, 2018
| GP104-150-A1
| rowspan="2" | 7.2
| rowspan="2" | 314
| rowspan="4" | 1.5
| rowspan="4" | 6
| rowspan="3" | 192.0
| rowspan="4" | 192
| rowspan="4" | 72.2<br />82.0
| rowspan="2" | $299
|-
| October 18, 2018
| GP104-150-KA-A1
| [[GDDR5X]]
|-
| July 19, 2016
| GP106-400-A1
| rowspan="2" | 4.4
| rowspan="2" | 200
| rowspan="2" | [[GDDR5]]
| $249
| $299
|-
| April 20, 2017
| GP106-410-A1
| 9
| 216.0
| $299
| {{N/a}}
|-
! style="text-align:left;" | GeForce GTX 1070<ref name="GeForce GTX 1070">{{cite web |url=https://www.nvidia.com/en-us/geforce/products/10series/geforce-gtx-1070-ti/ |title=GEFORCE GTX 1070 FAMILY |website=nvidia.com |access-date=2018-12-29 |archive-url=https://web.archive.org/web/20171027024554/https://www.nvidia.com/en-us/geforce/products/10series/geforce-gtx-1070-ti/ |archive-date=2017-10-27 |url-status=live}}</ref><ref>{{cite web |url=https://www.techpowerup.com/250266/nvidia-unveils-geforce-gtx-1070-with-gddr5x-memory |title=NVIDIA Unveils GeForce GTX 1070 with GDDR5X Memory |website=techpowerup.com |date=5 December 2018 |access-date=2018-12-29 |archive-url=https://web.archive.org/web/20181230082620/https://www.techpowerup.com/250266/nvidia-unveils-geforce-gtx-1070-with-gddr5x-memory |archive-date=2018-12-30 |url-status=live}}</ref>
| June 10, 2016 / December 4, 2018
| GP104-200-A1
| rowspan="4" | 7.2
| rowspan="4" | 314
| rowspan="2" | 1683
| rowspan="2" | 8
| 1920:120:64<br />(15) (3)
| rowspan="4" | 2
| rowspan="4" | 8
| rowspan="2" | 256.0
| [[GDDR5]]<br />[[GDDR5X]]
| rowspan="4" | 256
| 96.3<br/>107.7
| 180.7<br/>201.9
| 90.3<br />100.9
| 5,783.0<br/>6,462.7
| 180.7<br/>201.9
| 150
| rowspan="7" | 4-way [[Scalable Link Interface|SLI]]<br/>or<br/>2-way [[Scalable Link Interface#SLI HB|SLI HB]]<ref>{{cite web|url=https://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_1080_SLI/23.html|title=Nvidia GeForce GTX 1080 SLI|author=((W1zzard))|date=June 21, 2016|publisher=TechPowerUp|access-date=June 21, 2016|archive-url=https://web.archive.org/web/20160624023141/http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_1080_SLI/23.html|archive-date=June 24, 2016|url-status=live}}</ref>
| $379
| $449
|-
! style="text-align:left;" | GeForce GTX 1070 Ti<ref name="GeForce GTX 1070" />
| November 2, 2017
| GP104-300-A1
| rowspan="3" | 1607
| 2432:152:64<br />(19) (4)
| [[GDDR5]]
| 102.8<br/>107.7
| 244<br />256
| 122.1<br />127.9
| 7,816.4<br/>8,186.1
| 244.2<br/>255.8
| rowspan="3" | 180
| colspan="2" | $449
|-
! rowspan="2" style="text-align:left;" | GeForce GTX 1080<ref name="GeForce GTX 1080">{{cite web|url=https://www.nvidia.com/en-us/geforce/products/10series/geforce-gtx-1080/|title=GTX 1080 Graphics Card|author1=[[Nvidia]]|access-date=May 7, 2016|archive-url=https://web.archive.org/web/20161222152023/https://www.nvidia.com/en-us/geforce/products/10series/geforce-gtx-1080/|archive-date=December 22, 2016|url-status=live}}</ref><ref>{{cite web|url=http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_1080/3.html|title=Nvidia GeForce GTX 1080 8 GB|author=((W1zzard))|date=May 17, 2016|publisher=TechPowerUp|access-date=May 17, 2016|archive-url=https://web.archive.org/web/20160521031101/http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_1080/3.html|archive-date=May 21, 2016|url-status=live}}</ref>
| May 27, 2016
| GP104-400-A1
| rowspan="2" | 1733
| 10
| rowspan="2" | 2560:160:64<br />(20) (4)
| 320.0
| rowspan="5" | [[GDDR5X]]
| rowspan="2" | 102.8<br />110.9
| rowspan="2" | 257.1<br />277.2
| rowspan="2" | 128.5<br />138.6
| rowspan="2" | 8,227.8<br />8,872.9
| rowspan="2" | 257.1<br />277.2
| rowspan="2" | $599
| rowspan="2" | $699
|-
| April 20, 2017
| GP104-410-A1
| rowspan="2" | 11
| 352.0
|-
! style="text-align:left;" | GeForce GTX 1080 Ti<ref name="GeForce GTX 1080 Ti">{{cite web|url=https://www.nvidia.com/en-us/geforce/products/10series/geforce-gtx-1080-ti/|title=GeForce GTX 1080 Ti Graphics Card|author1=[[Nvidia]]|access-date=March 1, 2017|archive-url=https://web.archive.org/web/20170301191005/https://www.nvidia.com/en-us/geforce/products/10series/geforce-gtx-1080-ti/|archive-date=March 1, 2017|url-status=live}}</ref>
| March 5, 2017
| GP102-350-K1-A1
| rowspan="3" | 12
| rowspan="3" | 471
| 1480
| 1582
| 3584:224:88<br />(28) (6)
| 2.75
| 11
| 484.0
| 352
| 130.2<br />139.2
| 331.5<br />354.3
| 165.7<br />177.1
| 10,608.6<br />11,339.7
| 331.5<br />354.3
| rowspan="3" | 250
| colspan="2" | $699
|-
! style="text-align:left;" | TITAN X Pascal<ref name="Nvidia TITAN X Pascal">{{cite web|url=http://www.geforce.com/hardware/10series/titan-x-pascal|title=Nvidia TITAN X Graphics Card|author1=[[Nvidia]]|access-date=July 21, 2016|archive-url=https://web.archive.org/web/20160722050701/http://www.geforce.com/hardware/10series/titan-x-pascal|archive-date=July 22, 2016|url-status=live}}</ref>
| August 2, 2016
| GP102-400-A1
| 1417
| 1531
| 10
| 3584:224:96<br />(28) (6)
| rowspan="2" |3
| rowspan="2" | 12
| 480.0
| rowspan="2" | 384
| 136.0<br />146.9
| 317.4<br />342.9
| 158.7<br />171.4
| 10,157.0<br />10,974.2
| 317.4<br />342.9
| rowspan="2" | $1199
| rowspan="2" {{N/A}}
|-
! style="text-align:left;" | TITAN Xp<ref name="Nvidia TITAN Xp">{{cite web|url=https://www.nvidia.com/en-us/geforce/products/10series/titan-xp/|title=TITAN Xp Graphics Card with Pascal Architecture|author1=[[Nvidia]]|access-date=April 6, 2017|archive-url=https://web.archive.org/web/20170406223729/https://www.nvidia.com/en-us/geforce/products/10series/titan-xp/|archive-date=April 6, 2017|url-status=live}}</ref>
| April 6, 2017
| GP102-450-A1
| 1405
| 1582
| 11.4
| 3840:240:96<br />(30) (6)
| 547.7
| 134.8<br />142.0
| 337.2<br />355.2
| 168.6<br />177.6
| 10,790.4<br />12,149.7
| 337.2<br />355.2
|-
! rowspan="2" | Model
! rowspan="2" | Launch
! rowspan="2" | [[Code name]]
! rowspan="2" | Process
! rowspan="2" | Transistors (billion)
! rowspan="2" | Die size (mm<sup>2</sup>)
! rowspan="2" | [[Computer bus|Bus]] [[I/O interface|interface]]
! Base core ([[Hertz|MHz]])
! Boost core ([[Hertz|MHz]])
! Memory ([[Transfers per second|GT/s]])
! rowspan="2" | Core config {{efn|name=CoreConfig}}
! rowspan="2" | [[GPU cache|L2 Cache]]<br />([[Mebibyte|MiB]])
! Size ([[Gibibyte|GiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width ([[bit]])
! Pixel ([[Pixel|GP]]/s) {{efn|name=PixelFillrate}}{{efn|As the GTX 1070 has one of the four GP104 GPCs disabled in the die, its frontend is only able to rasterize 48 pixels per clock.<ref name="smith1"/> Analogically, the GTX 1060 features only two GPCs on its GP106 die, meaning that its frontend can only rasterize 32 pixels per clock. The remaining backend ROPs can still be used for tasks such as MSAA.<ref name="anandtech11"/>}}
! Texture ([[Texel (graphics)|GT]]/s) {{efn|name=TextureFillrate}}
! [[Half precision floating-point format|Half precision]]
! [[Single precision floating-point format|Single precision]]
! [[Double precision floating-point format|Double precision]]
! rowspan="2" | [[Thermal design power|TDP]] (Watts)
! rowspan="2" | [[Scalable Link Interface|SLI]] support
! MSRP
! Founders Edition
|-
! colspan="3" | Clock speeds
! colspan="4" | Memory
! colspan="2" | [[Fillrate]] {{efn|name=PerfValues}}
! colspan="3" |Processing power ([[GFLOPS]]) {{efn|name=PerfValues}}{{efn|name=ProcessingPower}}
! colspan="2" | Release price (USD)
|}
{{notelist|refs=
{{efn|name=CoreConfig|Main [[Unified shader model|shader processors]] : [[texture mapping unit]]s : [[render output unit]]s (streaming multiprocessors) (graphics processing clusters)}}
{{efn|name=PixelFillrate|Pixel fillrate is calculated as the lowest of three numbers: number of ROPs multiplied by the base core clock speed, number of rasterizers multiplied by the number of fragments they can generate per rasterizer multiplied by the base core clock speed, and the number of streaming multiprocessors multiplied by the number of fragments per clock that they can output multiplied by the base clock rate.}}
{{efn|name=TextureFillrate|Texture fillrate is calculated as the number of TMUs multiplied by the base core clock speed.}}
{{efn|name=ProcessingPower|To calculate the processing power see [[Pascal (microarchitecture)#Performance]].}}
{{efn|name=PerfValues|Base clock, Boost clock}}
}}
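As a worked example of the Pascal formulas above, the GTX 1080's texture fillrate at base clock is 160 TMUs × 1607 MHz ≈ 257.1 GT/s, and its single-precision throughput is 2560 × 2 × 1607 MHz ≈ 8,228 GFLOPS; for these consumer Pascal chips the half- and double-precision columns are 1/64 and 1/32 of the single-precision figure, respectively.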
===Volta series===
{{Further|Volta (microarchitecture)}}
* Supported [[Application programming interface|APIs]]: [[Direct3D]] 12 (12_1), [[OpenGL]] 4.6, [[OpenCL]] 3.0, [[Vulkan (API)|Vulkan]] 1.3<ref name="vulkandrv" /> and [[CUDA]] 7.0
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
! rowspan="2" | Model{{spaces}}name
! rowspan="2" | Launch
! rowspan="2" | [[Code name]]
! rowspan="2" | Process
! rowspan="2" | Transistors (billion)
! rowspan="2" | Die size (mm<sup>2</sup>)
! rowspan="2" | [[Computer bus|Bus]] [[I/O interface|Interface]]
! colspan="3" | Clock speeds
! rowspan="2" | Core config {{efn|name=CoreConfig}}
! rowspan="2" | [[GPU cache|L2 Cache]]<br />([[Mebibyte|MiB]])
! colspan="4" | Memory
! colspan="2" | [[Fillrate]] {{efn|name=PerfValues}}
! colspan="4" | Processing power ([[GFLOPS]]) {{efn|name=PerfValues}}
! rowspan="2" | [[Thermal design power|TDP]] (Watts)
! rowspan="2" | [[NVLink]] Support
! colspan="2" | Release price (USD)
|-
! Base core clock ([[Hertz|MHz]])
! Boost core clock ([[Hertz|MHz]])
! Memory ([[Transfers per second|GT/s]])
! Size ([[Gibibyte|GiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width ([[bit]])
! Pixel ([[Pixel|GP]]/s) {{efn|name=PixelFillrate}}
! Texture ([[Texel (graphics)|GT]]/s) {{efn|name=TextureFillrate}}
! [[Half-precision floating-point format|Half precision]]
! [[Single-precision floating-point format|Single precision]]
! [[Double-precision floating-point format|Double precision]]
! [[Tensor]] compute + Single precision
! MSRP
! Founders Edition
|-
! style="text-align:left;" | Nvidia Titan V<ref>{{cite web|url=https://www.nvidia.com/en-us/titan/titan-v|title=Nvidia TITAN V Graphics Card|author1=[[Nvidia]]|access-date=December 8, 2017|archive-url=https://web.archive.org/web/20171208093132/https://www.nvidia.com/en-us/titan/titan-v/|archive-date=December 8, 2017|url-status=live}}</ref>
| December 7, 2017
| GV100-400-A1
| rowspan="2" | [[TSMC]]<br />[[14 nm process|12FFN]]
| rowspan="2" | 21.1
| rowspan="2" | 815
| rowspan="2" | PCIe 3.0 x16
| rowspan="2" | 1200
| rowspan="2" | 1455
| rowspan="2" | 1.7
| 5120:320:96:640<br />(80) (6)
| 4.5
| 12
| 652.8
| rowspan="2" |[[HBM2]]
| 3072
| rowspan="2" | 153.6<br />186.2
| rowspan="2" | 384.0<br />465.6
| rowspan="2" | 24,576.0<br />29,798.4
| rowspan="2" | 12,288.0<br />14,899.2
| rowspan="2" | 6,144.0<br />7,449.6
| rowspan="2" | 110,592.0<br />134,092.8
| rowspan="2" | 250
| {{No}}
| $2999
| {{N/a}}
|-
! style="text-align:left;" | Nvidia{{spaces}}Titan{{spaces}}V<br />(CEO{{spaces}}Edition)<ref>{{Cite news|url=https://www.anandtech.com/show/13004/nvidia-limited-edition-32gb-titan-v-ceo-edition|title=NVIDIA Unveils & Gives Away New Limited Edition 32GB Titan V "CEO Edition"|last=Smith|first=Ryan|access-date=2018-08-08|archive-url=https://web.archive.org/web/20180730215157/https://www.anandtech.com/show/13004/nvidia-limited-edition-32gb-titan-v-ceo-edition|archive-date=2018-07-30|url-status=dead}}</ref><ref>{{Cite news|url=https://www.techpowerup.com/gpudb/3277/titan-v-ceo-edition|title=NVIDIA TITAN V - CEO Edition|work=TechPowerUp|access-date=2018-08-08|language=en|archive-url=https://archive.ph/UWyMd|archive-date=2025-08-25}}{{cbignore|bot=medic}}</ref>
| June 21, 2018
| GV100-???-A1
| 5120:320 :128:640<br />(80) (6)
| 6
| 32
| 870.4
| 4096
| {{No}}
| colspan="2" {{N/a}}
|}
{{notelist|refs=
{{efn|name=CoreConfig|Main [[Unified shader model|shader processors]] : [[texture mapping unit]]s : [[render output unit]]s : [[tensor core]]s (streaming multiprocessors) (graphics processing clusters)}}
{{efn|name=PixelFillrate|Pixel fillrate is calculated as the lowest of three numbers: number of ROPs multiplied by the base core clock speed, number of rasterizers multiplied by the number of fragments they can generate per rasterizer multiplied by the base core clock speed, and the number of streaming multiprocessors multiplied by the number of fragments per clock that they can output multiplied by the base clock rate.}}
{{efn|name=TextureFillrate|Texture fillrate is calculated as the number of TMUs multiplied by the base core clock speed.}}
{{efn|name=PerfValues|Base clock, Boost clock}}
}}
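As a worked example of the fillrate and processing-power notes above, the Titan V's figures follow from its 320 TMUs, 5120 shader processors and 1200 MHz base clock (1455 MHz boost), counting two floating-point operations (one fused multiply-add) per shader processor per clock; the half- and double-precision columns are then twice and half the single-precision value, respectively:
<math display="block">\begin{align}
\text{Texture fillrate} &= 320 \times 1.200\ \text{GHz} = 384.0\ \text{GT/s} \qquad (320 \times 1.455 = 465.6\ \text{GT/s at boost})\\
\text{Single precision} &= 2 \times 5120 \times 1.200\ \text{GHz} = 12{,}288\ \text{GFLOPS} \qquad (14{,}899.2\ \text{GFLOPS at boost})
\end{align}</math>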
===GeForce 16 series===
{{Further|GeForce 16 series|Turing (microarchitecture)}}
* Supported [[Application programming interface|APIs]]: [[Direct3D]] 12 (feature level 12_1), [[OpenGL]] 4.6, [[OpenCL]] 3.0, [[Vulkan (API)|Vulkan]] 1.3<ref name="vulkandrv" /> and [[CUDA]] 7.5
* [[Nvidia NVENC|NVENC]] 6th generation ([[B frame|B-frame]], etc.)
* TU117 only supports Volta [[Nvidia NVENC|NVENC]] (5th generation)
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
! rowspan="2" | Model{{spaces}}name
! rowspan="2" | Launch
! rowspan="2" | [[Code name]]
! rowspan="2" | Process
! rowspan="2" | Transistors (billion)
! rowspan="2" | Die size (mm<sup>2</sup>)
! rowspan="2" | [[Bus (computing)|Bus]] [[Input/output|interface]]
! colspan="3" | Clock speeds
! rowspan="2" | Core config {{efn|name=CoreConfig}}
! rowspan="2" | [[GPU cache|L2 Cache]] ([[Mebibyte|MiB]])
! colspan="4" | Memory
! colspan="2" | [[Fillrate]] {{efn|name=PerfValues}}
! colspan="3" |Processing power (G[[FLOPS]]) {{efn|name=PerfValues}}
! rowspan="2" | [[Thermal design power|TDP]] (Watts)
! rowspan="2" | [[NVLink]] support
! rowspan="2" | Release price (USD)
|-
! Base core clock ([[Hertz|MHz]])
! Boost core clock ([[Hertz|MHz]])
! Memory ([[Transfer (computing)|GT/s]])
! Size ([[Gibibyte|GiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width ([[bit]])
! Pixel ([[Pixel|GP]]/s) {{efn|name=PixelFillrate}}
! Texture ([[Texel (graphics)|GT]]/s) {{efn|name=TextureFillrate}}
! [[Half-precision floating-point format|Half precision]]
! [[Single-precision floating-point format|Single precision]]
! [[Double-precision floating-point format|Double precision]]
|-
! style="text-align:left;" | GeForce GTX 1630
| June 28, 2022<ref>{{cite web |title=NVIDIA officially launches GeForce GTX 1630 graphics card |url=https://videocardz.com/newz/nvidia-officially-launches-geforce-gtx-1630-graphics-card |access-date=2022-06-28 |website=VideoCardz.com |language=en-US}}</ref>
| TU117-150-A1
| rowspan="8" |[[TSMC]]<br />[[14 nm process|12FFN]]
| rowspan="3" |4.7
| rowspan="3" |200
| rowspan="8" |PCIe 3.0 x16
| 1740
| 1785
| 12
| 512:32 :16:1024:0<br />(8) (?)
| rowspan="4" |1
| rowspan="5" |4
| 96
| [[GDDR6 SDRAM|GDDR6]]
| 64
| 27.84 <br />28.56
| 55.68<br />57.12
| 3,563.52<br />3,655.68
| 1,781.76<br />1,827.84
| 55.68<br />57.12
| rowspan="3" |75
| rowspan="8" {{No}}
| {{Dunno}}
|-
! rowspan="3" style="text-align:left;" | GeForce GTX 1650<ref>{{cite web|url=https://www.nvidia.com/en-us/geforce/graphics-cards/gtx-1650/|title=NVIDIA GeForce GTX 1650 Graphics Card|website=NVIDIA|access-date=2019-04-23|archive-url=https://web.archive.org/web/20190423131601/https://www.nvidia.com/en-us/geforce/graphics-cards/gtx-1650/|archive-date=2019-04-23|url-status=live}}</ref>
| April 23, 2019
| rowspan="2" |TU117-300-A1
| 1485
| 1665
| 8
| rowspan="3" |896:56 :32:1792:0<br />(14) (2)
| 128
| [[GDDR5 SDRAM|GDDR5]]
| rowspan="4" |128
| 47.52<br />53.28
| 83.16 <br />93.24
| 5,322<br />5,967
| 2,661<br />2,984
| 83.16<br />93.24
| rowspan="3" |$149
|-
| April 3, 2020<ref>{{cite web|url=https://www.anandtech.com/show/15701/nvidias-geforce-gtx-1650-gddr6-released-gddr5-price-parity|title=NVIDIA's GeForce GTX 1650 GDDR6 Released: GDDR6 Reaching Price Parity With GDDR5|website=Anandtech|access-date=2020-04-06|archive-date=7 December 2023|archive-url=https://web.archive.org/web/20231207025426/http://www4.anandtech.com/show/15701/nvidias-geforce-gtx-1650-gddr6-released-gddr5-price-parity|url-status=dead}}</ref>
| rowspan="2" |1410
| rowspan="2" |1590
| rowspan="3" |12
| rowspan="4" |192
| rowspan="3" |[[GDDR6 SDRAM|GDDR6]]
| rowspan="2" | 45.12<br />50.88
| rowspan="2" | 78.96<br />89.04
| rowspan="2" | 5,053.44<br />5,698.56
| rowspan="2" | 2,526.72<br />2,849.28
| rowspan="2" | 78.96<br />89.04
|-
| June 18, 2020<ref>{{cite web|title=NVIDIA GeForce GTX 1650 TU106 Specs|url=https://www.techpowerup.com/gpu-specs/geforce-gtx-1650-tu106.c3585|access-date=2021-12-14|website=TechPowerUp|language=en|archive-date=23 November 2020|archive-url=https://web.archive.org/web/20201123174607/https://www.techpowerup.com/gpu-specs/geforce-gtx-1650-tu106.c3585|url-status=live}}</ref>
| TU106-125-A1
| 10.8
| 445
| 90
|-
! style="text-align:left;" | GeForce GTX 1650 Super<ref>{{cite web|url=https://www.nvidia.com/en-gb/geforce/graphics-cards/gtx-1650-super/|title=NVIDIA GeForce GTX 1650 SUPER Graphics Card|website=NVIDIA|access-date=2019-10-29|archive-date=29 October 2019|archive-url=https://web.archive.org/web/20191029131835/https://www.nvidia.com/en-gb/geforce/graphics-cards/gtx-1650-super/|url-status=live}}</ref>
| November 22, 2019<ref>{{cite web|url=https://www.anandtech.com/show/15041/nvidia-announces-geforce-gtx-1650-super-launching-november-22nd|title=NVIDIA Announces GeForce GTX 1650 Super: Launching November 22nd|access-date=2019-10-29|archive-date=29 October 2019|archive-url=https://web.archive.org/web/20191029171213/https://www.anandtech.com/show/15041/nvidia-announces-geforce-gtx-1650-super-launching-november-22nd|url-status=dead}}</ref>
| TU116-250-KA-A1<ref>{{cite web|url=https://www.cnet.com/news/gpu-memory-memory-bandwidth-memory-clock-gpu-clock-speed-memory-data-rateinterface-texture-fill-rate-ray-tracing-rt/|title=GTX 1660, 1650 Super boost speeds for Nvidia's cheapest gaming cards|access-date=2019-10-29|archive-date=29 October 2019|archive-url=https://web.archive.org/web/20191029151929/https://www.cnet.com/news/gpu-memory-memory-bandwidth-memory-clock-gpu-clock-speed-memory-data-rateinterface-texture-fill-rate-ray-tracing-rt/|url-status=live}}</ref>
| rowspan="4" |6.6
| rowspan="4" |284
| rowspan="3" |1530
| 1725
| 1280:80 :32:2560:0<br />(20) (3)
| rowspan="4" |1.5
| 48.96<br />55.20
| 122.40<br />138.0
| 7,833.60 <br />8,832
| 3,916.80<br />4,416
| 122.40<br />138.00
| 100
| $159
|-
! style="text-align:left;" | GeForce GTX 1660<ref>{{cite web|url=https://www.nvidia.com/en-us/geforce/graphics-cards/gtx-1660-ti/|title=The GeForce 16 Series Graphics Cards are Here|website=NVIDIA|language=en-us|access-date=2019-03-23|archive-url=https://web.archive.org/web/20190325235255/https://www.nvidia.com/en-us/geforce/graphics-cards/gtx-1660-ti/|archive-date=2019-03-25|url-status=live}}</ref>
| March 14, 2019
| TU116-300-A1
| rowspan="2" |1785
| 8
| rowspan="2" |1408:88 :48:2816:0<br />(22) (3)
| rowspan="3" |6
| [[GDDR5 SDRAM|GDDR5]]
| rowspan="3" |192
| rowspan="2" |73.44<br />85.68
| rowspan="2" |134.64<br />157.1
| rowspan="2" |8,616<br />10,053
| rowspan="2" |4,308<br />5,027
| rowspan="2" |134.64<br />157.08
| 120
| $219
|-
! style="text-align:left;" | GeForce GTX 1660 Super<ref>{{cite web|url=https://www.nvidia.com/en-gb/geforce/graphics-cards/gtx-1660-super/|title=NVIDIA GeForce GTX 1660 SUPER Graphics Card|website=NVIDIA|access-date=2019-10-29|archive-date=29 October 2019|archive-url=https://web.archive.org/web/20191029131837/https://www.nvidia.com/en-gb/geforce/graphics-cards/gtx-1660-super/|url-status=live}}</ref>
| October 29, 2019
| TU116-300-A1
| 14
| 336
| rowspan="2" |[[GDDR6 SDRAM|GDDR6]]
| 125
| $229
|-
! style="text-align:left;" | GeForce GTX 1660 Ti<ref>{{cite web|url=https://www.nvidia.com/en-us/geforce/graphics-cards/gtx-1660-ti/|title=NVIDIA GeForce RTX 1660 Graphics Card|website=NVIDIA|access-date=2019-02-22|archive-url=https://web.archive.org/web/20190222141328/https://www.nvidia.com/en-us/geforce/graphics-cards/gtx-1660-ti/|archive-date=2019-02-22|url-status=live}}</ref>
| February 21, 2019
| TU116-400-A1
| 1500
| 1770
| 12
| 1536:96 :48:3072:0<br />(24) (3)
| 288
| 72.0<br />84.96
| 144.0<br />169.9
| 9,216<br />10,874.88
| 4,608<br />5,437.44
| 144.00<br />169.92
| 120
| $279
|}
===RTX 20 series===
{{Further|GeForce 20 series|Turing (microarchitecture)}}
* Supported [[Application programming interface|APIs]]: [[Direct3D]] 12 Ultimate (12_2), [[OpenGL]] 4.6, [[OpenCL]] 3.0, [[Vulkan (API)|Vulkan]] 1.3<ref name="vulkandrv" /> and [[CUDA]] 7.5
* Unlike previous generations, the non-Super Founders Edition cards (RTX 2070, RTX 2080, RTX 2080 Ti) do not run at Nvidia's reference clocks but are factory-overclocked ("Factory OC"). The Super Founders Edition cards (RTX 2060 Super, RTX 2070 Super and RTX 2080 Super), however, use reference clocks.
* [[NVENC]] 6th generation ([[B frame|B-frame]], etc.)
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
! rowspan="2" | Model{{spaces}}name
! rowspan="2" | Launch
! rowspan="2" | [[Code name]]
! rowspan="2" | Process
! rowspan="2" | Transistors (billion)
! rowspan="2" | Die size (mm<sup>2</sup>)
! rowspan="2" | [[Computer bus|Bus]] [[I/O interface|interface]]
! colspan="3" | Clock speeds
! rowspan="2" | Core config {{efn|name=CoreConfig}}
! rowspan="2" | [[GPU cache|L2 Cache]]<br />([[Mebibyte|MiB]])
! colspan="4" | Memory
! colspan="2" | [[Fillrate]]{{efn|name=PerfValues}}
! colspan="4" | Processing power (T[[FLOPS]]) {{efn|name=PerfValues}}
! colspan="2" | [[Ray tracing (graphics)|Ray-tracing]] Performance
! rowspan="2" | [[Thermal design power|TDP]] (Watts)
! rowspan="2" | [[NVLink]] support
! colspan="2" | Release price (USD)
|-
! Base core clock ([[Hertz|MHz]])
! Boost core clock ([[Hertz|MHz]])
! Memory ([[Transfer (computing)|GT/s]])
! Size ([[Gibibyte|GiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width ([[bit]])
! Pixel ([[Pixel|GP]]/s){{efn|name=PixelFillrate}}
! Texture ([[Texel (graphics)|GT]]/s) {{efn|name=TextureFillrate}}
! [[Half precision floating-point format|Half precision]]
! [[Single precision floating-point format|Single precision]]
! [[Double precision floating-point format|Double precision]]
! [[Tensor]] compute (FP16)
! Rays/s (Billions)
! RTX OPS/s (Trillions)
! MSRP
! Founders Edition
|-
! rowspan="3" style="text-align:left;" | GeForce RTX 2060<ref name=":3">{{cite web|url=https://www.nvidia.com/en-us/geforce/graphics-cards/rtx-2060/|title=NVIDIA GeForce RTX 2060 Graphics Card|website=NVIDIA|access-date=2019-01-08|archive-url=https://web.archive.org/web/20190107124509/https://www.nvidia.com/en-us/geforce/graphics-cards/rtx-2060/|archive-date=2019-01-07|url-status=live}}</ref>
| January 15, 2019
| TU106-200-KA-A1
| rowspan="10" | [[TSMC]]<br />[[14 nm process|12FFN]]
| 10.8
| 445
| rowspan="10" | PCIe 3.0 x16
| rowspan="2" | 1365
| rowspan="2" | 1680
| rowspan="7" | 14
| rowspan="2" | 1920:120 :48:240:30<br />(30) (3)
| rowspan="3" | 3
| rowspan="2" | 6
| rowspan="3" | 336.0
| rowspan="10" | [[GDDR6 SDRAM|GDDR6]]
| rowspan="3" | 192
| rowspan="2" | 65.52<br />80.64
| rowspan="2" | 163.80<br />201.60
| rowspan="2" | 10.48<br />12.90
| rowspan="2" | 5.242<br />6.451
| rowspan="2" | 0.1638<br />0.2016
| rowspan="2" | 41.93<br />51.61
| rowspan="2" | 5
| rowspan="2" | 37
| rowspan="2" | 160
| rowspan="5" {{No}}
| $349
| rowspan="3" {{N/a}}
|-
| January 10, 2020
| TU104-150-KC-A1<ref>{{cite web|url=https://www.techpowerup.com/gpu-specs/geforce-rtx-2060-tu104.c3495|title=NVIDIA GeForce RTX 2060 TU104 Specs|website=TechPowerUp|language=en|access-date=2020-02-18}}</ref>
| 13.6
| 545
| rowspan="2" | $299
|-
| December 7, 2021<ref>{{cite web |url=https://videocardz.com/newz/nvidia-launches-geforce-rtx-2060-12gb-a-perfect-card-for-crypto-miners |title=NVIDIA launches GeForce RTX 2060 with 12GB and TU106-300 GPU, overpriced gaming GPU for miners|website=videocardz|language=en|access-date=2021-07-09}}</ref>
| TU106-300-KA-A1
| rowspan="3" | 10.8
| rowspan="3" | 445
| rowspan="2" | 1470
| rowspan="2" | 1650
| rowspan="2" | 2176:136 :64:272:34<br />(34) (3)
| 12
| 79.20
| rowspan="2" | 199.92<br />224.40
| rowspan="2" | 12.80<br />14.36
| rowspan="2" | 6.400<br />7.180
| rowspan="2" | 0.1999<br />0.2244
|
|
|
| 185
|-
! style="text-align:left;" | GeForce RTX 2060 Super<ref name="anandtech_super">{{cite web|url=https://www.anandtech.com/show/14586/geforce-rtx-2070-super-rtx-2060-super-review|title=The GeForce RTX 2070 Super & RTX 2060 Super Review|website=Anandtech|date=2019-07-02|access-date=2019-07-02|archive-url=https://web.archive.org/web/20190702140732/https://www.anandtech.com/show/14586/geforce-rtx-2070-super-rtx-2060-super-review|archive-date=2019-07-02|url-status=dead}}</ref><ref>{{cite web |url=https://www.nvidia.com/en-us/geforce/graphics-cards/rtx-2060-super/ |title=Your Graphics, Now with SUPER Powers |access-date=2019-07-02 |archive-url=https://web.archive.org/web/20190702140732/https://www.nvidia.com/en-us/geforce/graphics-cards/rtx-2060-super/ |archive-date=2019-07-02 |url-status=live}}</ref>
| July 9, 2019
| TU106-410-A1
| rowspan="5" | 4
| rowspan="5" | 8
| rowspan="4" | 448.0
| rowspan="5" | 256
| 94.08<br />105.60
| 51.20<br />57.44
| rowspan="2" | 6
| 41
| rowspan="2" | 175
| colspan="2" | $399
|-
! style="text-align:left;" | GeForce RTX 2070<ref name="Geforce RTX 2070">{{cite web |url=https://www.nvidia.com/en-us/geforce/graphics-cards/rtx-2070/ |title=Introducing NVIDIA GeForce RTX 2070 Graphics Card |website=NVIDIA |language=en-us |access-date=2018-08-20 |archive-url=https://web.archive.org/web/20180820234826/https://www.nvidia.com/en-us/geforce/graphics-cards/rtx-2070/ |archive-date=2018-08-20 |url-status=live}}</ref><ref>{{cite web |url=https://www.pcgamer.com/nvidia-turing-architecture-deep-dive/ |title=Nvidia Turing architecture deep dive |website=pcgamer.com |language=en-us |access-date=2018-09-27 |archive-url=https://web.archive.org/web/20180918231242/https://www.pcgamer.com/nvidia-turing-architecture-deep-dive/ |archive-date=2018-09-18 |url-status=live}}</ref>
| October 17, 2018
| TU106-400-A1
| 1410
| 1620
| 2304:144 :64:288:36<br />(36) (3)
| 90.24<br />103.68
| 203.04<br />233.28
| 12.99<br />14.93
| 6.497<br />7.465
| 0.2030<br />0.2333
| 51.98<br />59.72
| 42
| rowspan="2" | $499
| $599
|-
! style="text-align:left;"| GeForce RTX 2070 Super<ref name="anandtech_super" /><ref>{{cite web |url=https://www.nvidia.com/en-us/geforce/graphics-cards/rtx-2070-super/ |title=Your Graphics, Now with SUPER Powers |access-date=2019-07-02 |archive-url=https://web.archive.org/web/20190702140730/https://www.nvidia.com/en-us/geforce/graphics-cards/rtx-2070-super/ |archive-date=2019-07-02 |url-status=live}}</ref>
| July 9, 2019
| TU104-410-A1
| rowspan="3" | 13.6
| rowspan="3" | 545
| 1605
| 1770
| 2560:160 :64:320:40<br />(40) (5)
| 102.70<br />113.28
| 256.80<br />283.20
| 16.44<br />18.12
| 8.220<br />9.060
| 0.2568<br />0.2832
| 65.76<br />72.48
| 7
| 52
| rowspan="2" | 215
| rowspan="5" | 2-way [[NVLink]]
| $499
|-
! style="text-align:left;"| GeForce RTX 2080<ref name="Geforce RTX 2080">{{cite web |url=https://www.nvidia.com/en-us/geforce/graphics-cards/rtx-2080/ |title=NVIDIA GeForce RTX 2080 Founders Edition Graphics Card |website=NVIDIA |language=en-us |access-date=2018-08-20 |archive-url=https://web.archive.org/web/20180820234828/https://www.nvidia.com/en-us/geforce/graphics-cards/rtx-2080/ |archive-date=2018-08-20 |url-status=live}}</ref><ref>{{cite news |url=https://www.pcmag.com/news/363762/nvidia-can-automatically-overclock-your-geforce-rtx-in-20-mi |title=Nvidia Can Automatically Overclock Your GeForce RTX in 20 Minutes |newspaper=Pcmag |language=en-us |access-date=2018-09-20 |archive-url=https://web.archive.org/web/20180920160829/https://www.pcmag.com/news/363762/nvidia-can-automatically-overclock-your-geforce-rtx-in-20-mi |archive-date=2018-09-20 |url-status=live}}</ref>
| September 20, 2018
| TU104-400-A1
| 1515
| 1710
| 2944:184 :64:368:46<br />(46) (6)
| 96.96<br />109.44
| 278.76<br />314.64
| 17.84<br />20.14
| 8.920<br />10.07
| 0.2788<br />0.3146
| 71.36<br />80.55
| rowspan="2" | 8
| 57
| rowspan="2" | $699
| $799
|-
! style="text-align:left;"| GeForce RTX 2080 Super<ref name="anandtech_super" /><ref>{{cite web |url=https://www.nvidia.com/en-us/geforce/graphics-cards/rtx-2080-super/ |title=Your Graphics, Now with SUPER Powers |access-date=2019-07-02 |archive-url=https://web.archive.org/web/20190702140731/https://www.nvidia.com/en-us/geforce/graphics-cards/rtx-2080-super/ |archive-date=2019-07-02 |url-status=live}}</ref>
| July 23, 2019
| TU104-450-A1
| 1650
| 1815
| 15.5
| 3072:192 :64:384:48<br />(48) (6)
| 496.0
| 105.60<br />116.16
| 316.80<br />348.48
| 20.28<br />22.30
| 10.14<br />11.15
| 0.3168<br />0.3485
| 81.12<br />89.20
| 63
| rowspan="2" | 250
| $699
|-
! style="text-align:left;"| GeForce RTX 2080 Ti<ref name="Geforce RTX 2080 Ti">{{cite web|url=https://www.nvidia.com/en-us/geforce/graphics-cards/rtx-2080-ti/|title=Graphics Reinvented: NVIDIA GeForce RTX 2080 Ti Graphics Card|website=NVIDIA|language=en-us|access-date=2018-08-20|archive-url=https://web.archive.org/web/20180820170549/https://www.nvidia.com/en-us/geforce/graphics-cards/rtx-2080-ti/|archive-date=2018-08-20|url-status=live}}</ref>
| September 27, 2018
| TU102-300-K1-A1
| rowspan="2" | 18.6
| rowspan="2" | 754
| rowspan="2" | 1350
| 1545
| rowspan="2" | 14
| 4352:272 :88:544:68<br />(68) (6)
| 5.5
| 11
| 616.0
| 352
| 118.80<br />135.96
| 367.20<br />420.24
| 23.50<br />26.90
| 11.75<br />13.45
| 0.3672<br />0.4202
| 94.00<br />107.6
| 10
| 76
| $999
| $1,199
|-
! style="text-align:left;"| Nvidia Titan RTX<ref name="Titan RTX">{{cite web|url=https://www.nvidia.com/en-us/titan/titan-rtx/|title=TITAN RTX Ultimate PC Graphics Card with Turing: NVIDIA|website=nvidia.com|access-date=2018-12-27|archive-url=https://web.archive.org/web/20181226141249/https://www.nvidia.com/en-us/titan/titan-rtx/|archive-date=2018-12-26|url-status=live}}</ref>
| December 18, 2018
| TU102-400-A1
| 1770 {{efn|name=TitanRTXBoost}}
| 4608:288 :96:576:72<br />(72) (6)
| 6
| 24
| 672.0
| 384
| 129.60<br />169.92
| 388.80<br />509.76
| 24.88<br />32.62
| 12.44<br />16.31
| 0.3888<br />0.5098
| 99.53<br />130.5
| 11
| 84
| 280
| {{N/a}}
| $2,499
|}
{{notelist|refs=
{{efn|name=CoreConfig|Main [[Unified shader model|shader processors]] : [[texture mapping unit]]s : [[render output unit]]s : [[tensor core]]s (or FP16 cores in GeForce 16 series) : [[Ray tracing (graphics)|ray-tracing]] cores (streaming multiprocessors) (graphics processing clusters)}}
{{efn|name=PixelFillrate|Pixel fillrate is calculated as the lowest of three numbers: number of ROPs multiplied by the base core clock speed, number of rasterizers multiplied by the number of fragments they can generate per rasterizer multiplied by the base core clock speed, and the number of streaming multiprocessors multiplied by the number of fragments per clock that they can output multiplied by the base clock rate.}}
{{efn|name=TextureFillrate|Texture fillrate is calculated as the number of TMUs multiplied by the base core clock speed.}}
{{efn|name=PerfValues|Base clock, Boost clock}}
{{efn|name=TitanRTXBoost|Boost of the Founders Editions, as there is no reference version of this card.}}
}}
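The Turing-based cards above follow the same pattern. Taking the GeForce RTX 2080 as an example (2944 shader processors, 1515 MHz base clock, 1710 MHz boost), the single-precision figure counts two operations per shader processor per clock, and the FP16 tensor-compute column in this table is eight times the single-precision value:
<math display="block">\begin{align}
\text{Single precision} &= 2 \times 2944 \times 1.515\ \text{GHz} \approx 8.920\ \text{TFLOPS} \qquad (10.07\ \text{TFLOPS at boost})\\
\text{Tensor FP16} &= 8 \times 8.920 \approx 71.36\ \text{TFLOPS} \qquad (80.55\ \text{TFLOPS at boost})
\end{align}</math>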
=== RTX 30 series ===
{{Further|GeForce 30 series|Ampere (microarchitecture)}}
* Supported [[Application programming interface|APIs]]: [[Direct3D]] 12 Ultimate (12_2), [[OpenGL]] 4.6, [[OpenCL]] 3.0, [[Vulkan (API)|Vulkan]] 1.3<ref name="vulkandrv" /> and [[CUDA]] 8.6
* Supported display connections: [[HDMI]] 2.1, [[DisplayPort]] 1.4a
* [[NVENC]] 7th generation
* [[Tensor Core|Tensor core]] 3rd gen
* [[RT Core]] 2nd gen
* RTX IO
* Improved [[Nvidia NVDEC|NVDEC]] with [[AV1]] decode
* NVIDIA [[DLSS]] 2.0
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
! rowspan="2" | Model{{spaces}}name
! rowspan="2" | Launch
! rowspan="2" | [[Code name]]
! rowspan="2" | Process
! rowspan="2" | Transistors (billion)
! rowspan="2" | Die size (mm<sup>2</sup>)
! rowspan="2" | [[Bus (computing)|Bus]] [[Input/output|interface]]
! colspan="3" | Clock speeds
! rowspan="2" | Core config {{efn|name=CoreConfig}}
! rowspan="2" | [[GPU cache|L2 Cache]]<br />([[Mebibyte|MiB]])
! colspan="4" | Memory
! colspan="2" | [[Fillrate]]
! colspan="4" | Processing power (T[[FLOPS]])
! rowspan="2" | [[Ray tracing (graphics)|Ray-tracing]] Performance ([[TFLOPS]])
! rowspan="2" | [[Thermal design power|TDP]] (Watts)
! rowspan="2" | [[NVLink]] support
! colspan="2" | Release price (USD)
|-
! Base core clock ([[Hertz|MHz]])
! Boost core clock ([[Hertz|MHz]])
! Memory ([[Transfers per second|GT/s]])
! Size ([[Gibibyte|GiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width ([[bit]])
! Pixel ([[Pixel|GP]]/s)
! Texture ([[Texel (graphics)|GT]]/s)
! [[Half-precision floating-point format|Half precision]]
! [[Single-precision floating-point format|Single precision]]
! [[Double-precision floating-point format|Double precision]]
! [[Tensor]] compute (FP16) (2:1 sparse)
! MSRP
! Founders Edition
|-
! style="text-align:left;" rowspan="4" | GeForce RTX 3050<ref>{{cite web |url=https://www.nvidia.com/fi-fi/geforce/graphics-cards/30-series/rtx-3050/ |access-date=2022-01-04 |title=NVIDIA GeForce RTX 3050 Graphics Card Announcement |website=NVIDIA |language=fi |archive-date=29 March 2024 |archive-url=https://web.archive.org/web/20240329045731/https://www.nvidia.com/fi-fi/geforce/graphics-cards/30-series/rtx-3050/ |url-status=live}}</ref>
| February 2, 2024
| GA107-325
| rowspan="17" |[[Samsung Electronics|Samsung]]<br />[[10 nm process|8LPP]]
| rowspan="2" | 8.7
| rowspan="2" | 200
| rowspan="4" | PCIe 4.0<br />x8
| 1042
| 1470
| rowspan="4" | 14
| 2304:72 :32:72:18<br />(18) (2)
| rowspan="4" | 2
| 6
| 168.0
| rowspan="9" | [[GDDR6 SDRAM|GDDR6]]
| 96
| 33.34<br />47.04
| 75.02<br />105.8
| 4.802<br />6.774
| 4.802<br />6.774
| 0.075<br />0.105
| 30.1<br />60.2
|
| 70
| rowspan="15" {{No}}
| $169
| rowspan="8" {{N/a}}
|-
| December 16, 2022
| GA107-150-A1
| 1552
| 1777
| 2560:80 :32:80:20<br />(20) (2)
| rowspan="4" | 8
| rowspan="3" | 224.0
| rowspan="4" | 128
| 49.6<br />59.86
| 124.2<br />142.2
| 7.95<br />9.01
| 7.95<br />9.01
| 0.124<br />0.142
| 63.6<br />72.8
| 18.2
| 115
| $249
|-
| July 18, 2022
| GA106-150
| rowspan="5" | 13.25
| rowspan="5" | 276
| 1515
| 1755
| 2304:72 :32:72:18<br />(18) (2)
| 48.48<br />56.16
| 109.1<br />126.4
| 6.981<br />8.087
| 6.981<br />8.087
| 0.109<br />0.126
|
|
| rowspan="2" | 130
| {{okay|OEM}}<ref>{{cite web |last=Klotz |first=Aaron |date=July 18, 2022 |title=OEM Exclusive RTX 3050 Confirmed With Cutdown Specs |url=https://www.tomshardware.com/news/oem-exclusive-rtx-3050-neutered-core-specs |website=Tom's Hardware |language=en-US |access-date=September 19, 2022 |archive-date=30 April 2024 |archive-url=https://web.archive.org/web/20240430105811/https://www.tomshardware.com/news/oem-exclusive-rtx-3050-neutered-core-specs |url-status=live}}</ref>
|-
| January 27, 2022<ref>{{cite web |url=https://mugens-reviews.de/builds/pc/neueste-nvidia-grafikkarten/ |access-date=2022-01-06 |title=NVIDIA Announces the GeForce RTX 30 Series |website=Mugens-Reviews |language=de |archive-date=23 February 2024 |archive-url=https://web.archive.org/web/20240223155146/https://mugens-reviews.de/builds/pc/neueste-nvidia-grafikkarten/ |url-status=live}}</ref>
| GA106-150-A1
| 1552
| 1777
| 2560:80 :32:80:20<br />(20) (2)
| 49.6<br />59.86
| 124.2<br />142.2
| 7.95<br />9.01
| 7.95<br />9.01
| 0.124<br />0.142
| 63.6<br />72.8
| 18.2
| $249
|-
! style="text-align:left;" rowspan="4" | GeForce RTX 3060<ref>{{cite web |url=https://www.nvidia.com/en-us/geforce/news/geforce-rtx-3060/ |access-date=2021-01-12 |title=NVIDIA GeForce RTX 3060 Graphics Card Announcement |website=NVIDIA |archive-date=26 February 2022 |archive-url=https://web.archive.org/web/20220226224801/https://www.nvidia.com/en-us/geforce/news/geforce-rtx-3060/ |url-status=live}}</ref>
| October 27, 2022<ref name="videocardz.com">{{Cite web |title=NVIDIA officially introduces GeForce RTX 3060 8GB and RTX 3060 Ti GDDR6X |url=https://videocardz.com/newz/nvidia-officially-introduces-geforce-rtx-3060-8gb-and-rtx-3060-ti-gddr6x |access-date=2022-10-29 |website=VideoCardz.com |language=en-US |archive-date=29 October 2022 |archive-url=https://web.archive.org/web/20221029001133/https://videocardz.com/newz/nvidia-officially-introduces-geforce-rtx-3060-8gb-and-rtx-3060-ti-gddr6x |url-status=live}}</ref>
| rowspan="2" |GA106-302
| rowspan="13" |PCIe 4.0<br />x16
| rowspan="4" |1320
| rowspan="4" |1777
| rowspan="4" |15
| rowspan="4" |3584:112 :48:112:28<br />(28) (3)
| rowspan="4" |3
| 240.0
| rowspan="4" |63.4<br />85.3
| rowspan="4" |147.8<br />199.0
| rowspan="4" |9.46<br />12.74
| rowspan="4" |9.46<br />12.74
| rowspan="4" |0.148<br />0.199
| rowspan="4" |75.7<br />101.9
| rowspan="4" |25
| rowspan="4" |170
| rowspan="4" |$329
|-
| May 2021
| rowspan="3" |12
| rowspan="3" |360.0
| rowspan="3" |192
|-
| February 25, 2021
| GA106-300-A1
|-
| September 1, 2021
| GA104-150-A1<ref>{{cite web|last=Mujtaba|first=Hassan|date=2021-09-25|title=Custom GALAX & Gainward GeForce RTX 3060 Cards With NVIDIA Ampere GA104 GPUs Listed|url=https://wccftech.com/custom-galax-gainward-geforce-rtx-3060-cards-with-nvidia-ampere-ga104-gpus-listed/|access-date=2021-09-25|website=Wccftech|language=en-US|archive-date=13 November 2021|archive-url=https://web.archive.org/web/20211113155243/https://wccftech.com/custom-galax-gainward-geforce-rtx-3060-cards-with-nvidia-ampere-ga104-gpus-listed/|url-status=live}}</ref>
| rowspan="5" |17.4
| rowspan="5" |392.5
|-
! rowspan="2" style="text-align:left;" | GeForce RTX 3060 Ti<ref>{{cite web |url=https://www.nvidia.com/en-us/geforce/graphics-cards/30-series/rtx-3060-ti/ |access-date=2020-12-01 |title=NVIDIA GeForce RTX 3060 Ti Graphics Card |archive-date=12 January 2021 |archive-url=https://web.archive.org/web/20210112132428/https://www.nvidia.com/en-us/geforce/graphics-cards/30-series/rtx-3060-ti/ |url-status=live}}</ref>
| December 2, 2020
| GA104-200-A1
| rowspan="2" | 1410
| rowspan="2" | 1665
| 14
| rowspan="2" | 4864:152 :80:152:38<br />(38) (5)
| rowspan="4" |4
| rowspan="4" |8
| 448.0
| rowspan="4" | 256
| rowspan="2" | 112.8<br />133.2
| rowspan="2" | 214.3<br />253.1
| rowspan="2" | 13.70<br />16.20
| rowspan="2" | 13.72<br />16.20
| rowspan="2" | 0.214<br />0.253
| rowspan="2" | 109.7<br />129.6
| rowspan="2" | 32.4
| rowspan="2" | 200
| colspan="2" rowspan="2" | $399
|-
| October 27, 2022<ref name="videocardz.com"/>
| GA104-202
| 19
| 608.0
| [[GDDR6 SDRAM#GDDR6X|GDDR6X]]
|-
! style="text-align:left;" | GeForce RTX 3070<ref>{{cite web |url=https://www.nvidia.com/en-gb/geforce/graphics-cards/30-series/rtx-3070/ |access-date=2020-09-06 |title=NVIDIA GeForce RTX 3070 Graphics Card |website=NVIDIA |archive-date=14 May 2021 |archive-url=https://web.archive.org/web/20210514170715/https://www.nvidia.com/en-gb/geforce/graphics-cards/30-series/rtx-3070/ |url-status=live}}</ref><ref name=":4">{{cite web|last=Smith|first=Ryan|date=September 1, 2020|title=NVIDIA Announces the GeForce RTX 30 Series: Ampere For Gaming, Starting With RTX 3080 & RTX 3090|url=https://www.anandtech.com/show/16057/nvidia-announces-the-geforce-rtx-30-series-ampere-for-gaming-starting-with-rtx-3080-rtx-3090|access-date=2020-09-02|website=AnandTech|archive-date=12 January 2022|archive-url=https://web.archive.org/web/20220112113553/https://www.anandtech.com/show/16057/nvidia-announces-the-geforce-rtx-30-series-ampere-for-gaming-starting-with-rtx-3080-rtx-3090|url-status=dead}}</ref>
| October 29, 2020<ref>{{cite web|url=https://www.nvidia.com/en-us/geforce/news/geforce-rtx-3070-available-october-29/|title=GeForce RTX 3070 Availability Update|website=NVIDIA|access-date=23 May 2024|archive-date=11 January 2022|archive-url=https://web.archive.org/web/20220111103856/https://www.nvidia.com/en-us/geforce/news/geforce-rtx-3070-available-october-29/|url-status=live}}</ref>
| GA104-300-A1
| 1500
| 1725
| 14
| 5888:184 :96:184:46<br />(46) (6)
| 448.0
| [[GDDR6 SDRAM|GDDR6]]
| 144.0<br />165.6
| 276.0<br />317.4
| 17.66<br />20.31
| 17.66<br />20.31
| 0.276<br />0.318
| 141.31<br />162.98
| 40.6
| 220
| colspan="2" | $499
|-
! style="text-align:left;" | GeForce RTX 3070 Ti<ref>{{cite web |url=https://www.nvidia.com/en-us/geforce/graphics-cards/30-series/rtx-3070-3070ti/ |access-date=2021-06-02 |title=NVIDIA GeForce RTX 3070 Family |website=NVIDIA |archive-date=26 February 2022 |archive-url=https://web.archive.org/web/20220226223558/https://www.nvidia.com/en-us/geforce/graphics-cards/30-series/rtx-3070-3070ti/ |url-status=live}}</ref>
| June 10, 2021
| GA104-400-A1
| 1575
| 1770
| rowspan="4" |19
| 6144:192 :96:192:48<br />(48) (6)
| 608.3
| rowspan="6" |[[GDDR6 SDRAM#GDDR6X|GDDR6X]]
| 151.18<br />169.9
| 302.36<br />339.8
| 19.35<br />21.75
| 19.35<br />21.75
| 0.302<br />0.340
| 154.8<br />174.0
| 43.5
| 290
| colspan=2 | $599
|-
! rowspan="2" style="text-align:left;" | GeForce RTX 3080<ref>{{cite web |url=https://www.nvidia.com/en-gb/geforce/graphics-cards/30-series/rtx-3080/ |access-date=2020-09-06 |title=NVIDIA GeForce RTX 3080 Graphics Card |website=NVIDIA |archive-date=19 May 2021 |archive-url=https://web.archive.org/web/20210519061929/https://www.nvidia.com/en-gb/geforce/graphics-cards/30-series/rtx-3080/ |url-status=live}}</ref><ref name=":4"/>
| September 17, 2020
| GA102-200-A1
| rowspan="5" |28.3
| rowspan="5" |628.4
| 1440
| rowspan="2" | 1710
| 8704:272 :96:272:68<br />(68) (6)
| 5
| 10
| 760.0
| 320
| 138.2<br />164.2
| 391.68<br />465.12
| 25.06<br />29.76
| 25.07<br />29.77
| 0.392<br />0.465
| 200.54<br />238.14
| 59.5
| 320
| colspan=2 | $699
|-
| January 27, 2022
| GA102-220-A1
| 1260
| 8960:280 :96:280:70<br />(70) (6)
| rowspan="4" |6
| rowspan="2" |12
| rowspan="2" |912.0
| rowspan="4" |384
| 131.0<br />177.8
| 352.8<br />478.8
| 22.6<br />30.6
| 22.6<br />30.6
| 0.353<br />0.479
| 180.6<br />245.1
| 61.3
| rowspan="3" | 350
| colspan=2 | $799
|-
! style="text-align:left;" | GeForce RTX 3080 Ti<ref>{{cite web |url=https://www.nvidia.com/en-us/geforce/graphics-cards/30-series/rtx-3080-3080ti/ |access-date=2021-06-02 |title=NVIDIA GeForce RTX 3080 Family of Graphics Card |website=NVIDIA |archive-date=1 March 2022 |archive-url=https://web.archive.org/web/20220301194515/https://www.nvidia.com/en-us/geforce/graphics-cards/30-series/rtx-3080-3080ti/ |url-status=live}}</ref>
| June 3, 2021
| GA102-225-A1
| 1365
| 1665
| 10240:320 :112:320:80<br />(80) (7)
| 153.5<br />186.5
| 438.5<br />532.8
| 28.06<br />34.10
| 28.57<br />34.1
| 0.438<br />0.533
| 228.6<br />272.8
| 68.2
| colspan=2 | $1199
|-
! style="text-align:left;" | GeForce RTX 3090<ref>{{cite web |url=https://www.nvidia.com/en-gb/geforce/graphics-cards/30-series/rtx-3090/ |access-date=2020-09-06 |title=NVIDIA GeForce RTX 3090 Graphics Card |website=NVIDIA |archive-date=26 February 2022 |archive-url=https://web.archive.org/web/20220226220435/https://www.nvidia.com/en-gb/geforce/graphics-cards/30-series/rtx-3090/ |url-status=live}}</ref><ref name=":4"/>
| September 24, 2020
| GA102-300-A1
| 1395
| 1695
| 19.5
| 10496:328 :112:328:82<br />(82) (7)
| rowspan="2" |24
| 935.8
| 156.2<br />189.8
| 457.6<br />555.96
| 29.38<br />35.68
| 29.28<br />35.58
| 0.459<br />0.558
| 235.08<br />285.48
| 71.1
| rowspan="2" |2-way [[NVLink]]
| colspan=2 | $1499
|-
! style="text-align:left;" | GeForce RTX 3090 Ti<ref>{{cite web |title=GeForce RTX 3090 Ti Is Here: The Fastest GeForce GPU For The Most Demanding Creators & Gamers |url=https://www.nvidia.com/en-us/geforce/news/geforce-rtx-3090-ti-out-now/ |access-date=2022-04-02 |website=NVIDIA |language=en-us |archive-date=18 April 2024 |archive-url=https://web.archive.org/web/20240418002055/https://www.nvidia.com/en-us/geforce/news/geforce-rtx-3090-ti-out-now/ |url-status=live}}</ref><ref>{{cite web |title=NVIDIA GeForce RTX 3090 Ti Specs |url=https://www.techpowerup.com/gpu-specs/geforce-rtx-3090-ti.c3829 |access-date=2022-04-02 |website=TechPowerUp |language=en |archive-date=23 January 2023 |archive-url=https://web.archive.org/web/20230123200947/https://www.techpowerup.com/gpu-specs/geforce-rtx-3090-ti.c3829 |url-status=live}}</ref>
| March 29, 2022
| GA102-350-A1
| 1560
| 1860
| 21
| 10752:336 :112:336:84<br />(84) (7)
| 1008.3
| 174.7<br />208.3
| 524.2<br />625
| 33.5<br />40
| 33.5<br />40
| 0.524<br />0.625
| 269.1<br />320.9
| 79.9
| 450
| colspan=2 | $1999
|}
{{notelist|refs=
{{efn|name=CoreConfig|Main [[Unified shader model|shader processors]] : [[texture mapping unit]] : [[render output unit]]s : [[tensor core]]s : [[Ray tracing (graphics)|ray-tracing]] cores (streaming multiprocessors) (graphics processing clusters)}}
}}
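For the Ampere-based RTX 30 series cards above, the shader half-precision rate matches the single-precision rate, and the tensor-compute column (quoted with 2:1 structured sparsity, per the column header) again works out to eight times the single-precision figure. For the GeForce RTX 3080 at its 1440 MHz base clock, for example:
<math display="block">\begin{align}
\text{Single precision} &= 2 \times 8704 \times 1.440\ \text{GHz} \approx 25.07\ \text{TFLOPS}\\
\text{Tensor FP16 (sparse)} &= 8 \times 25.07 \approx 200.5\ \text{TFLOPS}
\end{align}</math>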
=== RTX 40 series ===
{{Further|GeForce 40 series|Ada Lovelace (microarchitecture)}}
*Supported [[Application programming interface|APIs]]: [[Direct3D]] 12 Ultimate (12_2), [[OpenGL]] 4.6, [[OpenCL]] 3.0, [[Vulkan (API)|Vulkan]] 1.3 and [[CUDA]] 8.9<ref>{{Cite web |title=CUDA C++ Programming Guide |url=https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html |access-date=2022-09-20 |website=docs.nvidia.com |language=en-us |archive-date=3 May 2021 |archive-url=https://web.archive.org/web/20210503160950/https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html |url-status=live}}</ref>
*Supported display connections: [[HDMI]] 2.1, [[DisplayPort]] 1.4a
*[[Tensor Core|Tensor core]] 4th gen
*[[RT core]] 3rd gen
*NVIDIA [[DLSS|DLSS 3]]
*NVIDIA [[DLSS|DLSS 3.5]]
*[https://developer.nvidia.com/blog/improve-shader-performance-and-in-game-frame-rates-with-shader-execution-reordering/ Shader Execution Reordering]
*Dual [[NVENC]] with 8K 10-bit 60FPS [[AV1]] fixed function hardware encoding<ref>{{Cite web |title=Creativity At The Speed of Light: GeForce RTX 40 Series Graphics Cards Unleash Up To 2X Performance in 3D Rendering, AI, and Video Exports For Gamers and Creators |url=https://www.nvidia.com/en-us/geforce/news/rtx-40-series-and-studio-updates-for-content-creation/ |website=NVIDIA |access-date=22 May 2023 |archive-date=1 June 2023 |archive-url=https://web.archive.org/web/20230601124541/https://www.nvidia.com/en-us/geforce/news/rtx-40-series-and-studio-updates-for-content-creation/ |url-status=live}}</ref><ref>{{cite web |title=Nvidia Video Codec SDK |url=https://developer.nvidia.com/nvidia-video-codec-sdk |date=23 August 2013 |access-date=22 May 2023 |archive-date=17 March 2023 |archive-url=https://web.archive.org/web/20230317082326/https://developer.nvidia.com/nvidia-video-codec-sdk |url-status=live}}</ref>
*Opacity Micro-Maps (OMM)
*Displacement Micro-Meshes (DMM)
*No NVLink support, Multi-GPU over PCIe 5.0<ref>{{Cite web |author1=Chuong Nguyen |date=2022-09-21 |title=Nvidia kills off NVLink on RTX 4090 |url=https://www.windowscentral.com/hardware/computers-desktops/nvidia-kills-off-nvlink-on-rtx-4090 |access-date=2023-01-01 |website=Windows Central |language=en |archive-date=24 April 2024 |archive-url=https://web.archive.org/web/20240424172523/https://www.windowscentral.com/hardware/computers-desktops/nvidia-kills-off-nvlink-on-rtx-4090 |url-status=live}}</ref><ref>{{Cite news |title=Jensen Confirms: NVLink Support in Ada Lovelace is Gone |url=https://www.techpowerup.com/299107/jensen-confirms-nvlink-support-in-ada-lovelace-is-gone |work=TechPowerUp |language=en-US |date=September 21, 2022 |access-date=November 21, 2022 |archive-date=18 October 2022 |archive-url=https://web.archive.org/web/20221018020318/https://www.techpowerup.com/299107/jensen-confirms-nvlink-support-in-ada-lovelace-is-gone |url-status=live}}</ref>
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
! rowspan="2" | Model{{spaces}}name
! rowspan="2" | Launch
! rowspan="2" | [[Code name]]
! rowspan="2" | Process
! rowspan="2" | Transistors (billion)
! rowspan="2" | Die size (mm<sup>2</sup>)
! rowspan="2" | [[Bus (computing)|Bus]] [[Input/output|interface]]
! colspan="3" | Clock speeds
! rowspan="2" | Core config {{efn|name="CoreConfig"}}
! rowspan="2" | [[GPU cache|L2 Cache]] ([[Mebibyte|MiB]])
! colspan="4" | Memory
! colspan="2" | [[Fillrate]]
! colspan="4" | Processing power ([[FLOPS|TFLOPS]])
! rowspan="2" | [[Ray tracing (graphics)|Ray-tracing]] Performance ([[TFLOPS]])
! rowspan="2" | [[Thermal design power|TDP]] (Watts)
! colspan="2" | Release price (USD)
|-
! Base core clock ([[Hertz|MHz]])
! Boost core clock ([[Hertz|MHz]])
! Memory ([[Transfers per second|GT/s]])
! Size ([[Gibibyte|GiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width ([[bit]])
! Pixel ([[Pixel|GP]]/s)
! Texture ([[Texel (graphics)|GT]]/s)
! [[Half-precision floating-point format|Half precision]]
! [[Single-precision floating-point format|Single precision]]
! [[Double-precision floating-point format|Double precision]]
! [[Tensor]] compute (FP16) (2:1 sparse)
! MSRP
! Founders Edition
|-
! style="text-align:left;" | GeForce RTX 4060<ref name=":0">{{Cite web |title=NVIDIA GeForce RTX 4060 Ti & 4060 Graphics Cards |url=https://www.nvidia.com/en-us/geforce/graphics-cards/40-series/rtx-4060-4060ti/ |access-date=2023-05-18 |website=NVIDIA |language=en-us |archive-date=8 May 2024 |archive-url=https://web.archive.org/web/20240508225711/https://www.nvidia.com/en-us/geforce/graphics-cards/40-series/rtx-4060-4060ti/ |url-status=live}}</ref>
| June 29, 2023
| AD107-400
| rowspan="12" |[[TSMC]] [[5 nm process|4N]]<ref>{{Cite web |title=Nvidia's Ada Lovelace GPU generation: $1,599 for RTX 4090, $899 and up for 4080 |url=https://arstechnica.com/gadgets/2022/09/nvidias-ada-lovelace-gpu-generation-1599-for-rtx-4090-899-and-up-for-4080/ |access-date=2023-07-27 |website=Ars Technica |date=20 September 2022 |language=en-us |archive-date=3 April 2023 |archive-url=https://web.archive.org/web/20230403183856/https://arstechnica.com/gadgets/2022/09/nvidias-ada-lovelace-gpu-generation-1599-for-rtx-4090-899-and-up-for-4080/ |url-status=live}}</ref>
| 18.9
| 158.7
| rowspan="3" |PCIe 4.0 x8
| 1830
| 2460
| 17
| 3072:96 :48:96:24<br />(24)(3)
| 24
| rowspan="2" |8
| 272
| rowspan="3" |[[GDDR6 SDRAM#GDDR6|GDDR6]]
| rowspan="3" |128
| 118.1
| 236.2
| 11.2<br />15.1
| 11.2<br />15.1
| 0.176<br />0.236
| 60 (121)
| 35
| 115
| $299
| {{N/A}}
|-
! rowspan="2" style="text-align:left;" |GeForce RTX 4060 Ti<ref name=":0" />
| May 24, 2023
| AD106-350
| rowspan="2" |22.9
| rowspan="2" |187.8
| rowspan="2" |2310
| rowspan="2" |2540
| rowspan="2" |18
| rowspan="2" |4352:136 :48:136:34<br />(34)(3)
| rowspan="2" |32
| rowspan="2" |288
| rowspan="2" |121.9
| rowspan="2" |345.4
| rowspan="2" |20.1<br />22.1
| rowspan="2" |20.1<br />22.1
| rowspan="2" |0.314<br />0.345
| rowspan="2" |88 (177)
| rowspan="2" |51
| rowspan="2" |160
| colspan="2" | $399
|-
| July 18, 2023
| AD106-351
| 16
| $499
| {{N/A}}
|-
! style="text-align:left;" | GeForce RTX 4070<ref>{{Cite web |title=NVIDIA GeForce RTX 4070 Family |url=https://www.nvidia.com/en-us/geforce/graphics-cards/40-series/rtx-4070-4070ti/ |access-date=2023-04-13 |website=NVIDIA |language=en-us |archive-date=27 December 2023 |archive-url=https://web.archive.org/web/20231227205426/https://www.nvidia.com/en-us/geforce/graphics-cards/40-series/rtx-4070-4070ti/ |url-status=live}}</ref>
| April 13, 2023
| AD104-250
| rowspan="3" |35.8
| rowspan="3" |294.5
| rowspan="9" |PCIe 4.0 x16
| 1920
| rowspan="2" |2475
| rowspan="5" |21
| 5888:184 :64:184:46<br />(46)(4)
| 36
| rowspan="3" |12
| rowspan="3" |504
| rowspan="9" |[[GDDR6 SDRAM#GDDR6X|GDDR6X]]
| rowspan="3" |192
| 158.4
| 455.4
| 22.6<br />29.1
| 22.6<br />29.1
| 0.353<br />0.455
| 117 (233)
| 67
| 200
| colspan="2" rowspan="2" |$599
|-
! style="text-align:left;" | GeForce RTX 4070 Super<ref>{{Cite web |date=2024-01-09 |title=NVIDIA GeForce RTX 4070 SUPER Specs |url=https://www.techpowerup.com/gpu-specs/geforce-rtx-4070-super.c4186 |access-date=2024-01-09 |website=TechPowerUp |language=en}}</ref><ref name=":14">{{Cite web |title=NVIDIA GeForce RTX 4070 Family Graphics Cards |url=https://www.nvidia.com/en-us/geforce/graphics-cards/40-series/rtx-4070-family/ |access-date=2024-01-09 |website=NVIDIA |language=en-us |archive-date=9 January 2024 |archive-url=https://web.archive.org/web/20240109004123/https://www.nvidia.com/en-us/geforce/graphics-cards/40-series/rtx-4070-family/ |url-status=live}}</ref>
| January 17, 2024
| AD104-350
| 1980
| 7168:224 :80:224:56<br />(56)(5)
| rowspan="4" |48
| 198.0
| 554.4
| 28.39<br />35.48
| 28.39<br />35.48
| 0.444<br />0.554
| 114 (227)<br />142 (284)
| 82
| 220
|-
! style="text-align:left;" | GeForce RTX 4070 Ti<ref>{{cite news |title=NVIDIA "accidentally" confirms GeForce RTX 4070 Ti GPU specifications |url=https://videocardz.com/newz/nvidia-accidentally-confirms-geforce-rtx-4070-ti-gpu-specifications |access-date=2022-12-31 |agency=VideoCardz |date=2022-12-30}}</ref>
| January 5, 2023
| AD104-400
| 2310
| rowspan="3" |2610
| 7680:240 :80:240:60<br />(60)(5)
| 208.8
| 626.4
| 35.5<br />40.1
| 35.5<br />40.1
| 0.554<br />0.627
| 142 (284)<br />160 (321)
| 92.7
| rowspan="3" |285
| rowspan="2" | $799
| rowspan="2" {{N/A}}
|-
! style="text-align:left;" | GeForce RTX 4070 Ti Super<ref name=":14" /><ref>{{Cite web |date=2024-01-09 |title=NVIDIA GeForce RTX 4070 Ti SUPER Specs |url=https://www.techpowerup.com/gpu-specs/geforce-rtx-4070-ti-super.c4187 |access-date=2024-01-09 |website=TechPowerUp |language=en}}</ref>
| January 24, 2024
| AD103-275
| 45.9
| 378.6
| 2340
| 8448:264 :96:264:66<br />(66) (6)
| 16
| 672
| 256
| 292.3
| 689.0
| 39.54<br />44.10
| 39.54<br />44.10
| 0.618<br />0.689
| 158 (316)<br />176 (353)
| 102
|-
! rowspan="2" style="text-align:left;" |GeForce RTX 4080<ref>{{Cite web |title=NVIDIA GeForce RTX 4080 Graphics Cards |url=https://www.nvidia.com/en-us/geforce/graphics-cards/40-series/rtx-4080/ |access-date=2022-09-20 |website=NVIDIA |language=en-us |archive-date=21 September 2022 |archive-url=https://web.archive.org/web/20220921085622/https://www.nvidia.com/en-us/geforce/graphics-cards/40-series/rtx-4080/ |url-status=live}}</ref>
| {{unreleased|Unlaunched}}<ref>{{Cite web |last=Warren |first=Tom |date=2022-10-14 |title=Nvidia says it's "unlaunching" the 12GB RTX 4080 after backlash |url=https://www.theverge.com/2022/10/14/23404595/nvidia-rtx-408-12gb-unlaunch |access-date=2022-10-14 |website=The Verge |language=en-US |archive-date=14 October 2022 |archive-url=https://web.archive.org/web/20221014173346/https://www.theverge.com/2022/10/14/23404595/nvidia-rtx-408-12gb-unlaunch |url-status=live}}</ref><ref>{{Cite web |title=Unlaunching The 12GB 4080 |url=https://www.nvidia.com/en-us/geforce/news/12gb-4080-unlaunch/ |access-date=2022-10-14 |website=NVIDIA |language=en-us |archive-date=14 October 2022 |archive-url=https://web.archive.org/web/20221014175115/https://www.nvidia.com/en-us/geforce/news/12gb-4080-unlaunch/ |url-status=live}}</ref>
| AD104-400
| 35.8
| 294.5
| 2310
| 7680:240 :80:240:60<br />(60) (5)
| 12
| 504
| 192
| 208.8
| 626.4
| 35.5<br />40.1
| 35.5<br />40.1
| 0.554<br />0.627
| 142 (284)<br />160 (321)
| 92.7
| $899
|
|-
| November 16, 2022
| AD103-300
| rowspan="2" | 45.9
| rowspan="2" | 378.6
| 2210
| 2505
| 22.4
| 9728:304 :112:304:76<br />(76)(7)
| rowspan="2" | 64
| rowspan="2" | 16
| 717
| rowspan="2" | 256
| 280.6
| 761.5
| 43.0<br />48.8
| 43.0<br />48.7
| 0.672<br />0.761
| 172 (344)<br />195 (390)
| 112.7
| rowspan="2" | 320
| colspan="2" | $1199
|-
! style="text-align:left;" | GeForce RTX 4080 Super<ref>{{Cite web |date=2024-01-09 |title=NVIDIA GeForce RTX 4080 SUPER Specs |url=https://www.techpowerup.com/gpu-specs/geforce-rtx-4080-super.c4182 |access-date=2024-01-09 |website=TechPowerUp |language=en}}</ref><ref>{{Cite web |title=NVIDIA GeForce RTX 4080 Family Graphics Cards |url=https://www.nvidia.com/en-us/geforce/graphics-cards/40-series/rtx-4080-family/ |access-date=2024-01-09 |website=NVIDIA |language=en-us}}</ref>
| January 31, 2024
| AD103-400
| 2295
| 2550
| 23
| 10240:320 :112:320:80<br />(80) (7)
| 736
| 285.6
| 816.0
| 47.0<br />52.22
| 47.0<br />52.22
| 0.734<br />0.816
| 188 (376)<br />209 (418)
| 121
| colspan="2" | $999
|-
! style="text-align:left;" | GeForce RTX 4090 D<ref>{{Cite web |title=GeForce RTX 4090 D 显卡 |url=https://www.nvidia.cn/geforce/graphics-cards/40-series/rtx-4090-d/ |access-date=2023-12-29 |website=NVIDIA |language=zh-cn |archive-date=29 December 2023 |archive-url=https://web.archive.org/web/20231229021947/https://www.nvidia.cn/geforce/graphics-cards/40-series/rtx-4090-d/ |url-status=live}}</ref><ref>{{Cite web |date=2023-12-29 |title=NVIDIA GeForce RTX 4090D Specs |url=https://www.techpowerup.com/gpu-specs/geforce-rtx-4090d.c4189 |access-date=2023-12-29 |website=TechPowerUp |language=en}}</ref>
| December 28, 2023
| AD102-250
| rowspan="2" |76.3
| rowspan="2" |608.5
| 2280
| rowspan="2" |2520
| rowspan="2" |21
| 14592:456 :176:456:114<br />(114) (11)
| rowspan="2" |72
| rowspan="2" |24
| rowspan="2" |1008
| rowspan="2" |384
| rowspan="2" |443.5
| 1149.1
| 66.5<br />73.5
| 66.5<br />73.5
| 1.040<br />1.149
| 266 (532)<br />294 (588)
| 170
| 425
| [[Renminbi|¥]]12,999
|
|-
! style="text-align:left;" |GeForce RTX 4090<ref>{{Cite web |title=NVIDIA GeForce RTX 4090 Graphics Cards |url=https://www.nvidia.com/en-us/geforce/graphics-cards/40-series/rtx-4090/ |access-date=2022-09-20 |website=NVIDIA |language=en-us |archive-date=8 March 2023 |archive-url=https://web.archive.org/web/20230308035725/https://www.nvidia.com/en-us/geforce/graphics-cards/40-series/rtx-4090/ |url-status=live}}</ref><ref>{{Cite web|title=NVIDIA Ada GPU Architecture|url=https://images.nvidia.com/aem-dam/Solutions/geforce/ada/nvidia-ada-gpu-architecture.pdf|access-date=October 8, 2022|archive-date=4 July 2023|archive-url=https://web.archive.org/web/20230704143803/https://images.nvidia.com/aem-dam/Solutions/geforce/ada/nvidia-ada-gpu-architecture.pdf|url-status=live}}</ref>
| October 12, 2022
| AD102-300
| 2235
| 16384:512 :176:512:128<br />(128) (11)
| 1290.2
| 73.1<br />82.6
| 73.1<br />82.6
| 1.142<br />1.291
| 292 (585)<br />330 (661)
| 191
| 450
| colspan="2" | $1599
|}
{{notelist|refs=
{{efn|name="CoreConfig"|Main [[Unified shader model|shader processors]] : [[texture mapping unit]] : [[render output unit]]s : [[tensor core]]s : [[Ray tracing (graphics)|ray-tracing]] cores (streaming multiprocessors) (graphics processing clusters)}}
}}
=== RTX 50 series ===
{{Further|GeForce 50 series|Blackwell (microarchitecture)}}
*GeForce RTX 50 series desktop GPUs are the first consumer GPUs to utilize a [[PCI Express#PCI Express 5.0|PCIe 5.0]] interface and [[GDDR7 SDRAM|GDDR7]] video memory.
*Supported [[Application programming interface|APIs]]: [[Direct3D]] 12.2, [[OpenGL]] 4.6, [[OpenCL]] 3.0, [[Vulkan (API)|Vulkan]] 1.4 and [[CUDA]] 12.x
*Supported display connections: [[HDMI]] 2.1b, [[DisplayPort]] 2.1b
*9th gen [[NVENC]] (3x/2x/1x) / 6th gen [[NVDEC]] (2x/1x)
*NVIDIA [[Deep Learning Super Sampling|DLSS 4]] (Multi Frame Generation support)
*AI Management Processor ([https://www.techpowerup.com/review/nvidia-geforce-rtx-50-technical-deep-dive/3.html#:~:text=address%20this%2C%20the-,AI%20Management%20Processor%20(AMP),-was%20introduced%20as AMP])
*[https://www.nvidia.com/en-us/geforce/news/reflex-2-even-lower-latency-gameplay-with-frame-warp/ Reflex] 2 optimized
*[[Tensor Core|Tensor core]] 5th gen (INT4/FP4 capabilities and second generation FP8 Transformer Engine)
*[[Ray-tracing hardware|RT core]] 4th gen
*[[Unified shader model|Shader processors]], [[Ray-tracing hardware|RT cores]] and [[Tensor Cores|Tensor cores]] optimized for RTX [https://developer.nvidia.com/blog/nvidia-rtx-neural-rendering-introduces-next-era-of-ai-powered-graphics-innovation/ Neural Shaders] and new neural workloads
*[https://github.com/NVIDIA-RTX/RTXMG Mega Geometry] Technology optimized (Shader processors and RT cores)
*[https://developer.nvidia.com/blog/improve-shader-performance-and-in-game-frame-rates-with-shader-execution-reordering/ Shader Execution Reordering] (SER) 2.0
*[https://developer.nvidia.com/blog/render-path-traced-hair-in-real-time-with-nvidia-geforce-rtx-50-series-gpus/ Linear Swept Spheres] (LSS)
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
! rowspan="2" | Model{{spaces}}name
! rowspan="2" | Launch
! rowspan="2" | [[Code name]]
! rowspan="2" | Process
! rowspan="2" | Transistors (billion)
! rowspan="2" | Die size (mm<sup>2</sup>)
! rowspan="2" | [[Bus (computing)|Bus]] [[Input/output|interface]]
! colspan="3" | Clock speeds
! rowspan="2" | Core config {{efn|name="CoreConfig"}}
! rowspan="2" | [[GPU cache|L2 Cache]] ([[Mebibyte|MiB]])
! colspan="4" | Memory
! colspan="2" | [[Fillrate]]
! colspan="5" |Processing power ([[FLOPS|TFLOPS]])
! rowspan="2" | [[Ray tracing (graphics)|Ray-tracing]] Performance ([[TFLOPS]])
! rowspan="2" | [[Thermal design power|TDP]] (Watts)
! colspan="2" | Release price (USD)
|-
! Base core clock ([[Hertz|MHz]])
! Boost core clock ([[Hertz|MHz]])
! Memory ([[Transfers per second|GT/s]])
! Size ([[Gibibyte|GiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width ([[bit]])
! Pixel ([[Pixel|GP]]/s)
! Texture ([[Texel (graphics)|GT]]/s)
! [[Half-precision floating-point format|Half precision]]
! [[Single-precision floating-point format|Single precision]]
! [[Double-precision floating-point format|Double precision]]
! [[Tensor]] compute FP16 (2:1 sparse)
! [[Tensor]] compute FP4 (2:1 sparse)
! MSRP
! Founders Edition
|-
! style="text-align:left;" | GeForce RTX 5050
| July 1, 2025
| GB207-300
| rowspan="8" |[[TSMC]] [[5 nm process|4N]]
| 16.9
| 149
| rowspan="4" |[[PCI Express#PCI Express 5.0|PCIe 5.0]] x8
| 2317
| 2572
| 20
| 2560:80 :32:80:20<br />(20) (2)
| rowspan="4" |32
| rowspan="3" |8
| 320
| [[GDDR6 SDRAM|GDDR6]]
| rowspan="4" |128
| 82.3
| 205.8
| 13.17
| 13.17
| 0.206
| 52 (105)
| 210 (421)
| 40
| 130
| $249
| rowspan="4" {{N/a}}
|-
! style="text-align:left;" | GeForce RTX 5060
| May 19, 2025
| GB206-250
| rowspan="3" |21.9
| rowspan="3" |181
| 2280
| 2497
| rowspan="5" |28
| 3840:120 :48:120:30<br />(30) (3)
| rowspan="3" |448
| rowspan="7" |[[GDDR7 SDRAM|GDDR7]]
| 119.9
| 299.6
| 19.18
| 19.18
| 0.300
| 77 (154)
| 307 (614)
| 58
| 145
| $299
|-
! rowspan="2" style="text-align:left;" |GeForce RTX 5060 Ti
| rowspan="2" |April 16, 2025
| rowspan="2" |GB206-300
| rowspan="2" |2407
| rowspan="2" |2572
| rowspan="2" |4608:144 :48:144:36<br />(36) (3)
| rowspan="2" |123.5
| rowspan="2" |370.4
| rowspan="2" |23.7
| rowspan="2" |23.7
| rowspan="2" |0.370
| rowspan="2" |95 (190)
| rowspan="2" |380 (760)
| rowspan="2" |72
| rowspan="2" |180
| $379
|-
| 16
| $429
|-
! style="text-align:left;" | GeForce RTX 5070
| {{dts|2025|March|5|format=mdy|abbr=on}}
| GB205-300
| 31.1
| 263
| rowspan="4" |[[PCI Express#PCI Express 5.0|PCIe 5.0]] x16
| 2160
| 2512
| 6144:192 :80:192:48<br />(48) (5)
| 48
| 12
| 672
| 192
| 201
| 483.8
| 30.87
| 30.87
| 0.4838
| 123 (247)
| 494 (988)
| 94
| 250
| colspan="2" |$549
|-
! style="text-align:left;" | GeForce RTX 5070 Ti
| {{dts|2025|February|20|format=mdy|abbr=on}}
| GB203-300
| rowspan="2" |45.6
| rowspan="2" | 378
| 2300
| 2452
| 8960:280 :96:280:70<br />(70) (6)
| rowspan="2" |64
| rowspan="2" | 16
| 896
| rowspan="2" | 256
| 235.4
| 693.0
| 44.35
| 44.35
| 0.693
| 176 (352)
| 703 (1406)
| 133
| 300
| $749
| {{N/a}}
|-
! style="text-align:left;" | GeForce RTX 5080<ref>{{Cite web |title=NVIDIA GeForce RTX 5080 Graphics Cards |url=https://www.nvidia.com/en-us/geforce/graphics-cards/50-series/rtx-5080/ |access-date=2025-01-14 |website=NVIDIA |language=en-us}}</ref>
| rowspan="2" | {{dts|2025|January|30|format=mdy|abbr=on}}
| GB203-400
| 2300
| 2617
| 30
| 10752:336 :112:336:84<br />(84) (7)
| 960
| 293.1
| 879.3
| 56.28
| 56.28
| 0.8793
| 225 (450)
| 900 (1801)
| 171
| 360
| colspan="2" |$999
|-
! style="text-align:left;" | GeForce RTX 5090<ref>{{Cite web |title=NVIDIA GeForce RTX 5090 Graphics Cards |url=https://www.nvidia.com/en-us/geforce/graphics-cards/50-series/rtx-5090/ |access-date=2025-01-14 |website=NVIDIA |language=en-us}}</ref>
| GB202-300
| 92.2
| 750
| 2010
| 2407
| 28
| 21760:680 :176:680:170<br />(170) (11)
| 96
| 32
| 1792
| 512
| 423.6
| 1636.8
| 104.77
| 104.77
| 1.637
| 419 (838)
| 1676 (3352)
| 317.5
| 575
| colspan="2" |$1,999
|}
{{notelist|refs=
{{efn|name="CoreConfig"|Main [[Unified shader model|shader processors]] : [[texture mapping unit]] : [[render output unit]]s : [[tensor core]]s : [[Ray tracing (graphics)|ray-tracing]] cores (streaming multiprocessors) (graphics processing clusters)}}
}}
==Mobile GPUs==
Mobile GPUs are either soldered directly to the mainboard or mounted on a [[Mobile PCI Express Module]] (MXM).
===GeForce2 Go series===
* All models are manufactured on a 180 nm process
* All models support [[Direct3D]] 7.0 and [[OpenGL]] 1.2
*[[Celsius (microarchitecture)]]
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
|-
!rowspan=2|Model
!rowspan=2|Launch
!rowspan=2|[[Code name]]
!rowspan=2|[[Computer bus|Bus]] [[I/O interface|interface]]
!rowspan=2|Core clock ([[Hertz|MHz]])
!rowspan=2|Memory clock ([[Hertz|MHz]])
!rowspan=2|Core config{{efn|name=GF2GCoreConfig}}
!colspan=4|Memory
! colspan="4" |[[Fillrate]]
|-
!Size ([[Mebibyte|MiB]])
!Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
!Bus type
!Bus width ([[bit]])
!MOperations/s
!MPixels/s
!MTexels/s
!MVertices/s
|-
!style="text-align:left"|GeForce2 Go 100
|February 6, 2001
| rowspan="3" |NV11M
| rowspan="3" |AGP 4x
|125
|332
| rowspan="3" |2:0:4:2
|8, 16
|1.328
|DDR
|32
|250
|250
|500
| rowspan="3" |0
|-
!style="text-align:left"|GeForce2 Go
|November 11, 2000
| rowspan="2" |143
|166<br />332
| rowspan="2" |16, 32
| rowspan="2" |2.656
|SDR<br />DDR
|128<br />64
| rowspan="2" |286
| rowspan="2" |286
| rowspan="2" |572
|-
!style="text-align:left"|GeForce2 Go 200
|February 6, 2001
|332
|DDR
|64
|}
{{notelist|refs=
{{efn|name=GF2GCoreConfig|[[Pixel shader]]s: [[vertex shader]]s: [[texture mapping unit]]s: [[render output unit]]s}}
}}
===GeForce4 Go series===
* All models are manufactured on a 150 nm fabrication process
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
|-
!rowspan=2|Model
!rowspan=2|Launch
!rowspan=2|[[Code name]]
!rowspan=2|[[Computer bus|Bus]] [[I/O interface|interface]]
!rowspan=2|Core clock ([[Hertz|MHz]])
!rowspan=2|Memory clock ([[Hertz|MHz]])
!rowspan=2|Core config{{efn|name=GF2GCoreConfig}}
!colspan=4|Memory
! colspan="4" |[[Fillrate]]
! colspan="2" |[[Application programming interface|API]] support
|-
!Size ([[Mebibyte|MiB]])
!Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
!Bus type
!Bus width ([[bit]])
!MOperations/s
!MPixels/s
!MTexels/s
!MVertices/s
![[Direct3D]]
![[OpenGL]]
|-
!style="text-align:left"|GeForce4 Go 410
| rowspan="3" |February 6, 2002
| rowspan="4" |NV17M
| rowspan="6" |AGP 8x
| rowspan="2" |200
|200
| rowspan="5" |2:0:4:2
|16
|1.6
|SDR
| rowspan="2" |64
| rowspan="2" |400
| rowspan="2" |400
| rowspan="2" |800
| rowspan="5" |0
| rowspan="5" |7.0
| rowspan="5" |1.2
|-
!style="text-align:left"|GeForce4 Go 420
|400
|32
|3.2
| rowspan="5" |DDR
|-
!style="text-align:left"|GeForce4 Go 440
|220
|440
| rowspan="4" |64
|7.04
| rowspan="4" |128
|440
|440
|880
|-
!style="text-align:left"|GeForce4 Go 460
|October 14, 2002
|250
|500
|8
|500
|500
|1000
|-
!style="text-align:left"|GeForce4 Go 488
|
|NV18M
|300
|550
|8.8
|600
|600
|1200
|-
!style="text-align:left"|GeForce4 Go 4200
|November 14, 2002
|NV28M
|200
|400
|4:2:8:4
|6.4
|800
|800
|1600
|100
|8.1
|1.3
|}
{{notelist|refs=
{{efn|name=GF2GCoreConfig|[[Pixel shader]]s: [[vertex shader]]s: [[texture mapping unit]]s: [[render output unit]]s}}
}}
===GeForce FX Go 5 (Go 5xxx) series===
The GeForce FX Go 5 series is the notebook implementation of the GeForce FX architecture.
* <sup>1</sup> [[Vertex shader]]s: [[pixel shader]]s: [[texture mapping unit]]s: [[render output unit]]s
* <sup>*</sup> The GeForce FX series runs vertex shaders in an array
* <sup>**</sup> GeForce FX series has limited OpenGL 2.1 support(with the last Windows XP driver released for it, 175.19).
*[[Rankine (microarchitecture)]]
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
|-
! rowspan=3 | Model
! rowspan=3 | Launch
! rowspan=3 | [[Code name]]
! rowspan=3 | Fab ([[Nanometer|nm]])
! rowspan=3 | [[Computer bus|Bus]] [[I/O interface|interface]]
! rowspan=3 | Core clock ([[Hertz|MHz]])
! rowspan=3 | Memory clock ([[Hertz|MHz]])
! rowspan=3 | Core config<sup>1</sup>
! colspan=4 | Memory
! colspan="2" |[[Fillrate]]
! colspan="3" | Supported [[Application programming interface|API]] version
! rowspan=3 | [[Thermal design power|TDP]] (Watts)
|-
! rowspan=2 | Size ([[Mebibyte|MiB]])
! rowspan=2 | Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! rowspan=2 | Bus type
! rowspan=2 | Bus width ([[bit]])
! rowspan="2" |Pixel ([[Pixel|GP]]/s)
! rowspan="2" |Texture ([[Texel (graphics)|GT]]/s)
! [[Direct3D]]
! colspan="2" | [[OpenGL]]
|-
!
! Hardware
! Drivers (Software)
|-
! style="text-align:left;" | GeForce FX Go 5100<sup>*</sup>
| rowspan="4" | March 2003
| rowspan="2" | NV34M
| rowspan="2" | 150
| rowspan="5" | AGP 8x
| 200
| 400
| rowspan="4" | 4:2:4:4
| 64
| 3.2
| rowspan="5" | DDR
| 64
| 0.8
| 0.8
| rowspan="5" | 9.0
| rowspan="5" | 1.5
| rowspan="5" | 2.1**
| {{unk}}
|-
! style="text-align:left;" | GeForce FX Go 5500<sup>*</sup>
| 300
| rowspan="3" | 600
| 32<br />64
| rowspan="3" | 9.6
| rowspan="4" | 128
|1.2
|1.2
| {{unk}}
|-
! style="text-align:left;" | GeForce FX Go 5600<sup>*</sup>
| rowspan="2" | NV31M
| rowspan="3" | 130
| 350
| rowspan="3" | 32
| rowspan="2" |1.4
| rowspan="2" |1.4
| {{unk}}
|-
! style="text-align:left;" | GeForce FX Go 5650<sup>*</sup>
| 350
| {{unk}}
|-
! style="text-align:left;" | GeForce FX Go 5700<sup>*</sup>
| February 1, 2005
| NV36M
| 450
| 550
| 4:3:4:4
| 8.8
|1.8
|1.8
| {{unk}}
|}
===GeForce Go 6 (Go 6xxx) series===
* All models support [[Direct3D]] 9.0c and [[OpenGL]] 2.1
* Based on the [[Curie (microarchitecture)|Curie]] microarchitecture
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
|-
!rowspan=2|Model
!rowspan=2|Launch
!rowspan=2|[[Code name]]
!rowspan=2|Fab ([[nanometer|nm]])
!rowspan=2|[[Computer bus|Bus]] [[I/O interface|interface]]
!rowspan=2|Core clock ([[Hertz|MHz]])
!rowspan=2|Memory clock ([[Hertz|MHz]])
!rowspan=2|Core config<sup>1</sup>
!colspan=4|Memory
! colspan="4" |[[Fillrate]]
|-
!Size ([[Mebibyte|MiB]])
!Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
!Bus type
!Bus width ([[bit]])
!MOperations/s
!MPixels/s
!MTexels/s
!MVertices/s
|-
!style="text-align:left"|GeForce Go 6100 + nForce Go 430
|{{unk}}
| rowspan="2" |C51M
| rowspan="5" |110
| rowspan="2" |HyperTransport
| rowspan="2" |425
| rowspan="2" |System memory
| rowspan="2" |2:1:2:1
| rowspan="2" |Up to 128 MiB system
| rowspan="2" |System memory
| rowspan="2" |DDR2
| rowspan="2" |64/128
| rowspan="2" |850
| rowspan="2" |425
| rowspan="2" |850
| rowspan="2" |106.25
|-
!style="text-align:left"|GeForce Go 6150 + nForce Go 430
|February 1, 2006
|-
!style="text-align:left"|GeForce Go 6200
|February 1, 2006
| rowspan="2" |NV44M
| rowspan="5" |PCIe x16
|300
|600
| rowspan="2" |4:3:4:2
| rowspan="2" |16
|2.4
| rowspan="3" |DDR
|32
|1200
|600
|1200
|225
|-
!style="text-align:left"|GeForce Go 6400
|February 1, 2006
|400
| rowspan="2" |700
|5.6
|64
|1600
|800
|1600
|250
|-
!style="text-align:left"|GeForce Go 6600
|September 29, 2005
|NV43M
| rowspan="2" |300
|8:3:8:4
| rowspan="2" |128
|11.2
|128
| rowspan="2" |3000
| rowspan="2" |1500
| rowspan="2" |3000
|281.25
|-
!style="text-align:left"|GeForce Go 6800
|November 8, 2004
| rowspan="2" |NV41M
| rowspan="2" |130
| rowspan="2" |700<br />1100
| rowspan="2" |12:5:12:12
| rowspan="2" |22.4<br />35.2
| rowspan="2" |DDR, DDR2<br />DDR3
| rowspan="2" |256
|375
|-
!style="text-align:left"|GeForce Go 6800 Ultra
|February 24, 2005
|450
|256
|5400
|3600
|5400
|562.5
|}
* <sup>1</sup> [[Pixel shader]]s: [[vertex shader]]s: [[texture mapping unit]]s: [[render output unit]]s
===GeForce Go 7 (Go 7xxx) series===
The GeForce Go 7 series for notebooks is based on the [[Curie (microarchitecture)|Curie]] microarchitecture.
* <sup>1</sup> [[Vertex shader]]s: [[pixel shader]]s: [[texture mapping unit]]s: [[render output unit]]s
* <sup>2</sup> Graphics card supports [[TurboCache]]; memory size entries in bold indicate total memory (graphics plus system RAM), otherwise entries are graphics RAM only
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
|-
! rowspan=2 | Model
! rowspan=2 | Launch
! rowspan=2 | [[Code name]]
! rowspan=2 | Fab ([[Nanometer|nm]])
! rowspan=2 | [[Computer bus|Bus]] [[I/O interface|interface]]
! rowspan=2 | Core clock ([[Hertz|MHz]])
! rowspan=2 | Memory clock ([[Hertz|MHz]])
! rowspan=2 | Core config<sup>1</sup>
! colspan=4 | Memory
! colspan="2" |[[Fillrate]]
! colspan="2" | Supported [[Application programming interface|API]] version
! rowspan=2 | [[Thermal design power|TDP]] (Watts)
! rowspan=2 | Features
|-
! Size ([[Mebibyte|MiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width ([[bit]])
!Pixel ([[Pixel|GP]]/s)
!Texture ([[Texel (graphics)|GT]]/s)
! [[Direct3D]]
! [[OpenGL]]
|-
!style="text-align:left;" | GeForce 7000M
| rowspan="2" | February 1, 2006
| MCP67MV
| rowspan="7" | 90
| rowspan="2" | [[HyperTransport|Hyper Transport]]
| 350
| rowspan="2" | System memory
| rowspan="2" | 1:2:2:2
| rowspan="2" | Up to 256 from system memory
| rowspan="2" | System memory
| rowspan="2" | DDR2
| rowspan="2" | 64/128
| 0.7
| 0.7
| rowspan="13" | 9.0c
| rowspan="13" | 2.1
| {{unk}}
| rowspan="2" |
|-
!style="text-align:left;" | GeForce 7150M
| MCP67M
| 425
| 0.85
| 0.85
| {{unk}}
|-
! style="text-align:left;" | GeForce Go 7200<sup>2</sup>
| rowspan="3" | January 2006
| rowspan="3" | G72M
| rowspan="11" | PCIe x16
| 450
| rowspan="2" | 700
| 3:4:4:1
| 64
| 2.8
| rowspan="11" | GDDR3
| 32
| 0.45
| 1.8
| {{unk}}
| rowspan="3" | Transparency Anti-Aliasing
|-
! style="text-align:left;" | GeForce Go 7300<sup>2</sup>
| 350
| rowspan="2" | 3:4:4:2
| 128, 256, '''512'''
| 5.60
| rowspan="2" | 64
| 0.7
| 1.4
| {{unk}}
|-
! style="text-align:left;" | GeForce Go 7400<sup>2</sup>
| rowspan="2" | 450
| 900
| 64, '''256'''
| 7.20
| 0.9
| 1.8
| {{unk}}
|-
! style="text-align:left;" | GeForce Go 7600
| March 2006
| rowspan="2" | G73M
| 1000
| 5:8:8:8
| 256, 512
| 16
| rowspan="3" | 128
| 3.6
| 3.6
| {{unk}}
| rowspan="8" | Scalable Link Interface (SLI), Transparency Anti-Aliasing
|-
! style="text-align:left;" | GeForce Go 7600 GT
| rowspan="2" | 2006
| 500
| 1200
| rowspan="2" | 5:12:12:8
|
| 19.2
|
|
|
|-
! style="text-align:left;" | GeForce Go 7700
| G73-N-B1
| 80
| 450
| 1000
| 512
|
|
|
|
|-
! style="text-align:left;" | GeForce Go 7800
| March 3, 2006
| rowspan="2" | G70M
| rowspan="2" | 110
| rowspan="2" | 400
| rowspan="2" | 1100
| 6:16:16:8
| rowspan="3" | 256
| rowspan="2" | 35.2
| rowspan="5" | 256
|3.2
|
|
|-
! style="text-align:left;" | GeForce Go 7800 GTX
|
| 8:24:24:16
|6.4
|
|
|-
! style="text-align:left;" | GeForce Go 7900 GS
|
| rowspan="3" | G71M
| rowspan="3" | 90
| 375
| 1000
| 7:20:20:16
| 32.0
|6
|7.5
|
|-
! style="text-align:left;" | GeForce Go 7900 GTX
| 500
| 1200
| rowspan="2" | 8:24:24:16
|
| 38.4
|8
|12
| rowspan="2" | 45
|-
! style="text-align:left;" | GeForce Go 7950 GTX
|
| 575
| 1400
| 512
| 44.8
|9.2
|13.8
|}
===GeForce 8M (8xxxM) series===
The GeForce 8M series for notebooks is based on the [[Tesla (microarchitecture)|Tesla]] microarchitecture.
* <sup>1</sup> [[Unified shader model|Unified shaders]]: [[texture mapping unit]]s: [[render output unit]]s
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
|-
! rowspan=2 | Model
! rowspan=2 | Launch
! rowspan=2 | [[Code name]]
! rowspan=2 | Fab ([[Nanometer|nm]])
! rowspan=2 | [[Computer bus|Bus]] [[I/O interface|interface]]
! colspan="3" | Clock speed
! rowspan="2" | Core config<sup>1</sup>
! colspan=4 | Memory
! colspan="2" |[[Fillrate]]
! rowspan="2" | Processing power ([[GFLOPS]])
! colspan="2" | Supported [[Application programming interface|API]] version
! rowspan=2 | [[Thermal design power|TDP]] (Watts)
! rowspan=2 | Notes
|-
! Core ([[Hertz|MHz]])
! Shader ([[Hertz|MHz]])
! Memory ([[Hertz|MHz]])
! Size ([[Mebibyte|MiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width ([[bit]])
!Pixel ([[Pixel|GP]]/s)
!Texture ([[Texel (graphics)|GT]]/s)
! [[Direct3D]]
! [[OpenGL]]
|-
! style="text-align:left;" | GeForce 8200M G<ref>{{cite web |url=http://www.geforce.com/hardware/notebook-gpus/geforce-8200m-g-mgpu/specifications |title=GeForce 8200M G mGPU. Specifications |website=Geforce.com |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20151222115936/http://www.geforce.com/hardware/notebook-gpus/geforce-8200m-g-mgpu/specifications |archive-date=2015-12-22 |url-status=live }}</ref>
| June 2008
| MCP77MV, MCP79MVL
| rowspan="7" | 80
| Integrated (PCIe 2.0 x16)
| rowspan="3" | 400
| rowspan="3" | 800
| 800<br />1066<br />(system memory)
| rowspan="2" | 8:8:4
| Up to 256 from system memory
| 12.8<br />17.056
| DDR2<br />DDR3
| 128
| rowspan="3" |1.6
| rowspan="3" |3.2
| 19.2
| rowspan="9" | 10.0
| rowspan="9" | 3.3
| {{unk}}
| [[PureVideo]] HD with VP3, Full H.264 / VC-1 / MPEG-2 HW Decode
|-
! style="text-align:left;" | GeForce 8400M G
| rowspan="5" | May 2007
| rowspan="3" | NB8M(G86)
| rowspan="6" | PCIe x16
| rowspan="2" | 800
| rowspan="2" | 128 / 256
| rowspan="2" | 6.4
| rowspan="5" | DDR2 / GDDR3
| rowspan="2" | 64
|
| 10
| rowspan="5" | [[PureVideo]] HD with VP2, BSP Engine, and AES128 Engine
|-
! style="text-align:left;" | GeForce 8400M GS
| rowspan="3" | 16:8:4
| 38.4
| 11
|-
! style="text-align:left;" | GeForce 8400M GT
| 450
| 900
| 1200
| rowspan="4" | 256 / 512
| 19.2
| rowspan="4" | 128
|1.8
|3.6
| 43.2
| 14
|-
! style="text-align:left;" | GeForce 8600M GS
| rowspan="3" | NB8P(G84)
| 600
| 1200
| 1400
| 22.4
|2.4
|4.8
| 57.6
| rowspan="2" | 20
|-
! style="text-align:left;" | GeForce 8600M GT
| 475
| 950
| 800 / 1400
| rowspan="2" | 32:16:8
| 12.8 / 22.4
|3.8
|7.6
| 91.2
|-
! style="text-align:left;" | GeForce 8700M GT
| June 2007
| 625
| rowspan="3" | 1250
| rowspan="3" | 1600
| 25.6
| rowspan="3" | GDDR3
|5
|10
| 120
| 29
| rowspan="3" | [[Scalable Link Interface]], [[PureVideo]] HD with VP2, BSP Engine, and AES128 Engine
|-
! style="text-align:left;" | GeForce 8800M GTS
| rowspan="2" | November 2007
| rowspan="2" | NB8P(G92)
| rowspan="2" | 65
| rowspan="2" | PCIe 2.0 x16
| rowspan="2" | 500
| 64:32:16
| rowspan="2" | 512
| rowspan="2" | 51.2
| rowspan="2" | 256
| rowspan="2" |8
|16
| 240
| 50
|-
! style="text-align:left;" | GeForce 8800M GTX
| 96:48:16
|24
| 360
| 65
|}
===GeForce 9M (9xxxM) series===
The GeForce 9M series for notebooks is based on the [[Tesla (microarchitecture)|Tesla]] microarchitecture.
* <sup>1</sup> [[Unified shader model|Unified shaders]]: [[texture mapping unit]]s: [[render output unit]]s
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
|-
! rowspan=2 | Model
! rowspan=2 | Launch
! rowspan=2 | [[Code name]]
! rowspan=2 | Fab ([[Nanometer|nm]])
! rowspan=2 | [[Computer bus|Bus]] [[I/O interface|interface]]
! colspan="3" | Clock speed
! rowspan="2" | Core config<sup>1</sup>
! colspan=4 | Memory
! colspan="2" |[[Fillrate]]
! rowspan="2" | Processing power ([[GFLOPS]])
! colspan="2" | Supported [[Application programming interface|API]] version
! rowspan=2 | [[Thermal design power|TDP]] (Watts)
! rowspan=2 | Notes
|-
! Core ([[Hertz|MHz]])
! Shader ([[Hertz|MHz]])
! Memory ([[Hertz|MHz]])
! Size ([[Mebibyte|MiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width ([[bit]])
!Pixel ([[Pixel|GP]]/s)
!Texture ([[Texel (graphics)|GT]]/s)
! [[Direct3D]]
! [[OpenGL]]
|-
! style="text-align:left;" | GeForce 9100M G <br />mGPU
| rowspan="4" | 2008
| MCP77MH, MCP79MH
| rowspan="2" | 65
| Integrated <br />(PCIe 2.0 x16)
| 450
| 1100
| 1066<br />(system memory)
| rowspan="2" | 8:8:4
| Up to 256 from system memory
| 17.056
| DDR3
| 128
|1.8
|3.6
| 26.4
| rowspan="17" | 10.0
| rowspan="17" | 3.3
| 12
| Similar to 8400M G
|-
! style="text-align:left;" | GeForce 9200M GS
| NB9M-GE(G98)
| rowspan="3" | PCIe 2.0 x16
| 550
| 1300
| 1400
| 256
| 11.2
| rowspan="3" | DDR2/GDDR3
| rowspan="3" | 64
|2.2
|4.4
| 31.2
| rowspan="3" | 13
| rowspan="3" |
|-
! style="text-align:left;" | GeForce 9300M G
| NB9M-GE(G86)
| 80
| 400
| 800
| 1200
| 16:8:4
| rowspan="2" | 256/512
| 9.6
|1.6
|3.2
| 38.4
|-
! style="text-align:left;" | GeForce 9300M GS
| NB9M-GE(G98)
| rowspan="3" | 65
| 550
| 1400
| 1400
| 8:8:4
| 11.2
|2.2
|4.4
| 33.6
|-
! style="text-align:left;" | GeForce 9400M G
| October 15, 2008
| MCP79MX
| Integrated(PCIe 2.0 x16)
| 450
| 1100
| 800<br />1066<br />(system memory)
| 16:8:4
| Up to 256 from system memory
| 12.8<br />17.056
| DDR2<br />DDR3
| rowspan="8" | 128
|1.8
|3.6
| 54
| 12
| PureVideo HD with VP3. Known as the GeForce 9400M in Apple systems<ref>{{cite web |url=http://www.nvidia.com/object/product_geforce_9400m_g_us.html |title=Hardware {{pipe}} GeForce |website=Nvidia.com |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20120212161506/http://www.nvidia.com/object/product_geforce_9400m_g_us.html |archive-date=2012-02-12 |url-status=live }}</ref> and [[Nvidia ION]] based systems
|-
! style="text-align:left;" | GeForce 9500M G
| rowspan="6" | 2008
| NB9P(G96)
| PCIe 2.0 x16
| 500
| 1250
| 1600
| 16:8:8
| rowspan="2" | 512
| 25.6
| rowspan="4" | DDR2 / GDDR3
|4
|4
| 60
| rowspan="3" | 20
|
|-
! style="text-align:left;" | GeForce 9500M GS
| NB9P-GV(G96)
| 80
| PCIe x16
| 475
| 950
| 1400
| rowspan="6" | 32:16:8
| 22.4
|3.8
|7.6
| 91.2
|Rebranded 8600M GT
|-
! style="text-align:left;" | GeForce 9600M GS
| NB9P-GE2(G96)
| rowspan="2" | 65
| rowspan="4" | PCIe 2.0 x16
| 430
| 1075
| 800<br />1600
| 1024
| 12.8<br />25.6
|3.44
|6.88
| 103.2
| rowspan="2" |
|-
! style="text-align:left;" | GeForce 9600M GT
| NB9P-GS(G96)
| 500
| rowspan="2" | 1250
| rowspan="9" | 1600
| 512/1024
| rowspan="4" | 25.6
|4
|8
| rowspan="2" | 120
| 23
|-
! style="text-align:left;" | GeForce 9650M GS
| NB9P-GS1(G84)
| 80
| 625
| 512
| rowspan="8" | GDDR3
|5
|10
| 29
|Rebranded 8700M GT
|-
! style="text-align:left;" | GeForce 9650M GT
| NB9P-GT(G96)
| 65/55
| 550
| 1325
| 1024
|4.4
|8.8
| 127.2
| 23
| rowspan="3" |
|-
! style="text-align:left;" | GeForce 9700M GT
| rowspan="2" | July 29, 2008
| NB9E-GE(G96)
| rowspan="3" | 65
| PCIe x16
| 625
| 1550
| rowspan="3" | 512
|5
|10
| 148.8
| 45
|-
! style="text-align:left;" | GeForce 9700M GTS
| NB9E-GS(G94)
| rowspan="5" | PCIe 2.0 x16
| rowspan="2" | 530
| rowspan="2" | 1325
| 48:24:16
| rowspan="5" | 51.2
| rowspan="5" | 256
|8.48
|12.7
| 190.8
| rowspan="2" | 60
|-
! style="text-align:left;" | GeForce 9800M GS
| 2008
| rowspan="2" | NB9E-GT(G94)
| 64:32:16
|8.48
|16.96
| 254
|Down-clocked 9800M GTS via firmware
|-
! style="text-align:left;" | GeForce 9800M GTS
| rowspan="3" | July 29, 2008
| rowspan="2" | 65/55
| 600
| 1500
| 64:32:16
| 512 / 1024
|9.6
|19.2
| 288
| 75
|
|-
! style="text-align:left;" | GeForce 9800M GT
| NB9E-GT2(G92)
| rowspan="2" | 500
| rowspan="2" | 1250
| 96:48:16
| 512
| rowspan="2" |8
|24
| 360
| 65
|Rebranded 8800M GTX
|-
! style="text-align:left;" | GeForce 9800M GTX
| NB9E-GTX(G92)
| 65
| 112:56:16
| 1024
|28
| 420
| 75
|
|-
! rowspan=2 | Model
! rowspan=2 | Launch
! rowspan=2 | [[Code name]]
! rowspan=2 | Fab ([[Nanometer|nm]])
! rowspan=2 | [[Computer bus|Bus]] [[I/O interface|interface]]
! Core ([[Hertz|MHz]])
! Shader ([[Hertz|MHz]])
! Memory ([[Hertz|MHz]])
! rowspan="2" | Core config<sup>1</sup>
! Size ([[Mebibyte|MiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width ([[bit]])
!Pixel ([[Pixel|GP]]/s)
!Texture ([[Texel (graphics)|GT]]/s)
! rowspan="2" | Processing power ([[GFLOPS]])
! [[Direct3D]]
! [[OpenGL]]
! rowspan=2 | [[Thermal design power|TDP]] (Watts)
! rowspan=2 | Notes
|-
! colspan=3 | Clock speed
! colspan=4 | Memory
! colspan="2" |[[Fillrate]]
! colspan="2" | Supported [[Application programming interface|API]] version
|}
===GeForce 100M (1xxM) series===
The GeForce 100M series for notebooks is based on the [[Tesla (microarchitecture)|Tesla]] microarchitecture. The 103M, 105M, 110M and 130M are rebranded GPUs, i.e. they use the same GPU cores as the previous-generation 9M series, with promised optimisations to other features.
* <sup>1</sup> [[Unified shader model|Unified shaders]]: [[texture mapping unit]]s: [[render output unit]]s
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
|-
! rowspan=2 | Model
! rowspan=2 | Launch
! rowspan=2 | [[Code name]]
! rowspan=2 | Fab ([[Nanometer|nm]])
! rowspan=2 | [[Computer bus|Bus]] [[I/O interface|interface]]
! colspan="3" | Clock speed
! rowspan="2" | Core config<sup>1</sup>
! colspan=4 | Memory
! colspan="2" |[[Fillrate]]
! rowspan="2" | Processing power ([[GFLOPS]])
! colspan="2" | Supported [[Application programming interface|API]] version
! rowspan=2 | [[Thermal design power|TDP]] (Watts)
! rowspan=2 | Notes
|-
! Core ([[Hertz|MHz]])
! Shader ([[Hertz|MHz]])
! Memory ([[Hertz|MHz]])
! Size ([[Mebibyte|MiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width ([[bit]])
!Pixel ([[Pixel|GP]]/s)
!Texture ([[Texel (graphics)|GT]]/s)
! [[Direct3D]]
! [[OpenGL]]
|-
! style="text-align:left;" | GeForce G 102M
| January 8, 2009
| MCP79XT
| rowspan="3" | 65
| Integrated<br />(PCIe 1.0 x16)
| 450?
| 1000
| 800<br />(system memory)
| 16:8:4
| Up to 512 from system memory
| 6.4
| rowspan="2" | DDR2
| rowspan="4" | 64
|1.8
|3.6
| 48
| rowspan="8" | 10.0
| rowspan="8" | 3.3
| rowspan="4" | 14
| PureVideo HD, CUDA, Hybrid SLI, based on GeForce 9400M G
|-
! style="text-align:left;" | GeForce G 103M
| January 1, 2009
| N10M-GE2(G98)
| rowspan="7" | PCIe 2.0 x16
| rowspan="2" | 640
| rowspan="2" | 1600
| 1000
| rowspan="2" | 8:8:4
| 512
| 8
| rowspan="2" |2.56
| rowspan="2" |5.12
| 38
| rowspan="2" | PureVideo HD, CUDA, Hybrid SLI, comparable to the GeForce 9300M GS
|-
! style="text-align:left;" | GeForce G 105M
| rowspan="2" | January 8, 2009
| N10M-GE1(G98)
| 1000<br />1400
|
| 8<br />11
| GDDR2<br />GDDR3
| 38
|-
! style="text-align:left;" | GeForce G 110M
| N10M-GE1(G96b)
| rowspan="5" | 55
| 400
| 1000
| 1000<br />1400
| 16:8:4
| rowspan="5" | 1024
| 8<br />11
| DDR2<br />GDDR3
|1.6
|3.2
| 48
| PureVideo HD, CUDA, Hybrid SLI
|-
! style="text-align:left;" | GeForce GT 120M
| February 11, 2009
| N10P-GV1(G96b)
| 500
| 1250
| 1000
| rowspan="2" | 32:16:8
| 16
| DDR2
| rowspan="2" | 128
|4
|8
| 110
| rowspan="2" | 23
| PureVideo HD, CUDA, Hybrid SLI, Comparable to the 9500M GT and 9600M GT DDR2 (500/1250/400)
|-
! style="text-align:left;" | GeForce GT 130M
| January 8, 2009
| N10P-GE1(G96b)
| 600
| 1500
| 1000<br />1600
| 16<br />25.6
| DDR2<br />GDDR3
|4.8
|9.6
| 144
| PureVideo HD, CUDA, Hybrid SLI, comparable to the 9650M GT
|-
! style="text-align:left;" | GeForce GTS 150M
| rowspan="2" | March 3, 2009
| N10E-GE1(G94b)
| 400
| 1000
| rowspan="2" | 1600
| rowspan="2" | 64:32:16
| rowspan="2" | 51.2
| rowspan="2" | GDDR3
| rowspan="2" | 256
|6.4
|12.8
| 192
| {{unk}}
| rowspan="2" | PureVideo HD, CUDA, Hybrid SLI
|-
! style="text-align:left;" | GeForce GTS 160M
| N10E-GS1(G94b)
| 600
| 1500
|9.6
|19.2
| 288
| 60
|}
===GeForce 200M (2xxM) series===
The GeForce 200M series for notebooks is based on the [[Tesla (microarchitecture)|Tesla]] microarchitecture.
* <sup>1</sup> [[Unified shader model|Unified shaders]]: [[texture mapping unit]]s: [[render output unit]]s
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
|-
! rowspan=2 | Model
! rowspan=2 | Launch
! rowspan=2 | [[Code name]]
! rowspan=2 | Fab ([[Nanometer|nm]])
! rowspan=2 | [[Computer bus|Bus]] [[I/O interface|interface]]
! colspan="3" | Clock speed
! rowspan="2" | Core config<sup>1</sup>
! colspan=4 | Memory
! colspan="2" |[[Fillrate]]
! rowspan="2" | Processing power ([[GFLOPS]])
! colspan="2" | Supported [[Application programming interface|API]] version
! rowspan=2 | [[Thermal design power|TDP]] (Watts)
! rowspan=2 | Notes
|-
! Core ([[Hertz|MHz]])
! Shader ([[Hertz|MHz]])
! Memory ([[Hertz|MHz]])
! Size ([[Mebibyte|MiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width ([[bit]])
!Pixel ([[Pixel|GP]]/s)
!Texture ([[Texel (graphics)|GT]]/s)
! [[Direct3D]]
! [[OpenGL]]
|-
! style="text-align:left;" | GeForce G210M
| June 15, 2009
| GT218
| 40
| rowspan="9" | PCIe 2.0 x16
| 625
| 1500
| 1600
| 16:8:4
| 512
| 12.8
| GDDR3
| 64
|2.5
|5
| 72
| rowspan="9" | 10.1
| rowspan="9" | 3.3
| rowspan="2" | 14
| Lower-clocked versions of the GT218 core are also known as [[Nvidia Ion|Nvidia ION 2]]
|-
! style="text-align:left;" | GeForce GT 220M
| 2009
| G96b
| 55
| rowspan="2" | 500
| 1250
| 1000<br />1600
| 32:16:8
| rowspan="8" | 1024
| 16<br />25.6
| DDR2<br />GDDR3
| rowspan="5" | 128
| rowspan="2" |4
| rowspan="2" |8
| 120
| Rebranded 9600M GT with a 55 nm node shrink
|-
! style="text-align:left;" | GeForce GT 230M
| rowspan="4" | June 15, 2009
| rowspan="2" | GT216
| rowspan="4" | 40
| 1100
| rowspan="2" | 1600
| rowspan="2" | 48:16:8
| rowspan="2" | 25.6
| rowspan="2" | GDDR3
| 158
| rowspan="2" | 23
| rowspan="6" |
|-
! style="text-align:left;" | GeForce GT 240M
| 550
| 1210
|4.4
|8.8
| 174
|-
! style="text-align:left;" | GeForce GTS 250M
| GT215
| 500
| 1250
| 3200
| rowspan="2" | 96:32:8
| 51.2
| rowspan="2" | GDDR5
|4
|16
| 360
| 28
|-
! style="text-align:left;" | GeForce GTS 260M
| GT215
| rowspan="2" | 550
| rowspan="2" | 1375
| 3600
| 57.6
|4.4
|17.6
| 396
| 38
|-
! style="text-align:left;" | GeForce GTX 260M
| rowspan="2" | March 3, 2009
| rowspan="3" | G92b
| rowspan="3" | 55
| rowspan="2" | 1900
| 112:56:16
| rowspan="2" | 60.8
| rowspan="3" | GDDR3
| rowspan="3" | 256
|8.8
|30.8
| 462
| 65
|-
! style="text-align:left;" | GeForce GTX 280M
| 585
| 1463
| rowspan="2" | 128:64:16
|9.36
|37.44
| 562
| rowspan="2" | 75
|-
! style="text-align:left;" | GeForce GTX 285M
| February 2010
| 600
| 1500
| 2000
| 64.0
|9.6
|38.4
| 576
| Higher-clocked version of the GTX 280M with faster memory
|}
===GeForce 300M (3xxM) series===
The GeForce 300M series for notebooks is based on the [[Tesla (microarchitecture)|Tesla]] microarchitecture.
* <sup>1</sup> [[Unified shader model|Unified shaders]]: [[texture mapping unit]]s: [[render output unit]]s
* <sup>2</sup> To calculate the processing power see [[Tesla (microarchitecture)#Performance]].
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
|-
! rowspan=2 | Model
! rowspan=2 | Launch
! rowspan=2 | [[Code name]]
! rowspan=2 | Fab ([[Nanometer|nm]])
! rowspan=2 | [[Computer bus|Bus]] [[I/O interface|interface]]
! colspan="3" | Clock speed
! rowspan="2" | Core config<sup>1</sup>
! colspan=4 | Memory
! colspan="2" |[[Fillrate]]
! rowspan="2" | Processing power ([[GFLOPS]])<sup>2</sup>
! colspan="2" | Supported [[Application programming interface|API]] version
! rowspan=2 | [[Thermal design power|TDP]] (Watts)
|-
! Core ([[Hertz|MHz]])
! Shader ([[Hertz|MHz]])
! Memory ([[Hertz|MHz]])
! Size ([[Mebibyte|MiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width ([[bit]])
!Pixel ([[Pixel|GP]]/s)
!Texture ([[Texel (graphics)|GT]]/s)
! [[Direct3D]]
! [[OpenGL]]
|-
! style="text-align:left;" | GeForce 305M
| rowspan="2" | January 10, 2010
| rowspan="3" | GT218
| rowspan="10" | 40
| rowspan="10" | PCIe 2.0 x16
| 525
| 1150
| 1400
| rowspan="3" | 16:8:4
| rowspan="3" | 512
| 11.2
| rowspan="3" | DDR3<br />GDDR3
| rowspan="3" | 64
|2.1
|4.2
| 55
| rowspan="10" | 10.1
| rowspan="10" | 3.3
| rowspan="3" | 14
|-
! style="text-align:left;" | GeForce 310M
| 625
| 1530
| rowspan="2" | 1600
| rowspan="2" | 12.8
|2.5
|5
| 73
|-
! style="text-align:left;" | GeForce 315M
| January 5, 2011
| 606
| 1212
|2.42
|4.85
| 58.18
|-
! style="text-align:left;" | GeForce 320M
| April 1, 2010
| MCP89
| 450
| 950
| 1066
| 48:16:8
| 256 (shared w/ system memory) <br /><ref>{{cite web |url=http://www.anandtech.com/show/3762/apples-13inch-macbook-pro-early-2010-reviewed-shaking-the-cpugpu-balance/2 |title=Not Arrandale, but Better Graphics - Apple's 13-inch MacBook Pro (Early 2010) Reviewed: Shaking the CPU/GPU Balance |website=Anandtech.com |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20160113203734/http://www.anandtech.com/show/3762/apples-13inch-macbook-pro-early-2010-reviewed-shaking-the-cpugpu-balance/2 |archive-date=2016-01-13 |url-status=dead }}</ref>
| 17.056
| DDR3
| rowspan="7" | 128
|3.6
|7.2
| 136.8
| 20
|-
! style="text-align:left;" | GeForce GT 320M
| January 21, 2010
| rowspan="3" | GT216
| 500
| 1100
| 1580
| 24:8:8
| rowspan="6" | 1024
| 25.3
| rowspan="4" | DDR3<br />GDDR3
|4
|4
| 90
| 14
|-
! style="text-align:left;" | GeForce GT 325M
| rowspan="2" | January 10, 2010
| 450
| 990
| rowspan="3" | 1600
| rowspan="2" | 48:16:8
| rowspan="3" | 25.6
|3.6
|7.2
| 142
| rowspan="2" | 23
|-
! style="text-align:left;" | GeForce GT 330M
| 575
| 1265
|4.6
|9.2
| 182
|-
! style="text-align:left;" | GeForce GT 335M
| rowspan="3" | January 7, 2010
| rowspan="3" | GT215
| 450
| 1080
| 72:24:8
|3.6
|10.8
| 233
| 28?
|-
! style="text-align:left;" | GeForce GTS 350M
| rowspan="2" | 500
| 1249
| 3200
| rowspan="2" | 96:32:8
| 51.2
| rowspan="2" | DDR3<br />GDDR3<br />GDDR5
|4
|16
| 360
| 28
|-
! style="text-align:left;" | GeForce GTS 360M
| 1436
| 3600
| 57.6
|4.4
|17.6
| 413
| 38
|}
===GeForce 400M (4xxM) series===
The GeForce 400M series for notebooks is based on the [[Fermi (microarchitecture)|Fermi]] microarchitecture.
* <sup>1</sup> [[Unified shader model|Unified shaders]]: [[texture mapping unit]]s: [[render output unit]]s
* <sup>2</sup> To calculate the processing power see [[Fermi (microarchitecture)#Performance]].
* <sup>3</sup> Each SM in the GF100 also contains 4 texture address units and 16 texture filtering units. Total for the full GF100 64 texture address units and 256 texture filtering units.<ref name="anandtech.com"/> Each SM in the GF104/106/108 architecture contains 8 texture filtering units for every texture address unit. The complete GF104 die contains 64 texture address units and 512 texture filtering units, the complete GF106 die contains 32 texture address units and 256 texture filtering units and the complete GF108 die contains 16 texture address units and 128 texture filtering units.
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
|-
! rowspan=2 | Model
! rowspan=2 | Launch
! rowspan=2 | [[Code name]]
! rowspan=2 | Fab ([[Nanometer|nm]])
! rowspan=2 | [[Computer bus|Bus]] [[I/O interface|interface]]
! colspan="3" | Clock speed
! rowspan="2" | Core config<sup>1</sup>
! colspan=4 | Memory
! colspan="2" |[[Fillrate]]
! rowspan="2" | Processing power ([[GFLOPS]])<sup>2</sup>
! colspan="2" | Supported [[Application programming interface|API]] version
! rowspan=2 | [[Thermal design power|TDP]] (Watts)
! rowspan=2 | Notes
|-
! Core ([[Hertz|MHz]])
! Shader ([[Hertz|MHz]])
! Memory ([[Hertz|MHz]])
! Size ([[Gibibyte|GiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width ([[bit]])
!Pixel ([[Pixel|GP]]/s)
!Texture ([[Texel (graphics)|GT]]/s)
! [[Direct3D]]
! [[OpenGL]]
|-
! style="text-align:left;" | GeForce 410M
| January 5, 2011
| GF119
| rowspan="10" | 40
| rowspan="10" | PCIe 2.0 x16
| 575
| 1150
| rowspan="5" | 1600
| rowspan="2" | 48:8<sup>3</sup>:4
| rowspan="3" | 0.5 and 1
| 12.8
| rowspan="5" | DDR3
| 64
|2.3
|4.6
| 110.4
| rowspan="10" | 12
| rowspan="10" | 4.5
| 12
| rowspan="2" |Similar to Desktop GT420 OEM
|-
! style="text-align:left;" | GeForce GT 415M
| rowspan="7" | September 3, 2010
| rowspan="4" | GF108
| rowspan="2" | 500
| rowspan="2" | 1000
| rowspan="4" | 25.6
| rowspan="4" | 128
| rowspan="2" |2
|4
| 96
| <12(GPU only)<ref name="TDP_400M">{{cite web |url=http://www.hardwarecanucks.com/forum/hardware-canucks-reviews/35860-nvidias-geforce-400m-mobile-gpus-7-new-fermis-introduced-3.html |title=Nvidia's GeForce 400M Mobile GPUs: 7 New Fermis Introduced - Page 3 |website=Hardwarecanucks.com |date=2 September 2010 |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20151222092048/http://www.hardwarecanucks.com/forum/hardware-canucks-reviews/35860-nvidias-geforce-400m-mobile-gpus-7-new-fermis-introduced-3.html |archive-date=2015-12-22 |url-status=live }}</ref>
|-
! style="text-align:left;" | GeForce GT 420M
| rowspan="3" | 96:16<sup>3</sup>:4
|8
| 192
| 10–23(GPU only)<ref name="TDP_400M"/>
| rowspan="2" |Similar to Desktop GT430
|-
! style="text-align:left;" | GeForce GT 425M
| 560
| 1120
| 1
|2.24
|8.96
| 215.04
| 20–23(GPU only)<ref name="TDP_400M"/>
|-
! style="text-align:left;" | GeForce GT 435M
| 650
| 1300
| 2
|2.6
|10.4
| 249.6
| 32–35(GPU only)<ref name="TDP_400M"/>
|Similar to Desktop GT430/440
|-
! style="text-align:left;" | GeForce GT 445M
| rowspan="2" | GF106
| 590
| 1180
| 1600<br />2500
| 144:24<sup>3</sup>:16<br />144:24<sup>3</sup>:24
| 1<br />1.5
| 25.6<br />60
| DDR3<br />GDDR5
| 128<br />192
|9.44<br />14.16
|14.16
| 339.84
| 30–35(GPU only)<ref name="TDP_400M"/>
|Similar to Desktop GTS450 OEM
|-
! style="text-align:left;" | GeForce GTX 460M
| 675
| 1350
| rowspan="2" | 2500
| 192:32<sup>3</sup>:24
| rowspan="2" | 1.5
| rowspan="2" | 60
| rowspan="4" | GDDR5
| rowspan="2" | 192
|16.2
|21.6
| 518.4
| rowspan="2" | 45–50(GPU only)<ref name="TDP_400M"/>
|Similar to Desktop GTX550 Ti
|-
! style="text-align:left;" | GeForce GTX 470M
| GF104
| 550
| 1100
| 288:48<sup>3</sup>:24
|13.2
|26.4
| 633.6
|Similar to Desktop GTX 460/560SE
|-
! style="text-align:left;" | GeForce GTX 480M
| May 25, 2010
| GF100
| 425
| 850
| 2400
| 352:44<sup>3</sup>:32
| rowspan="2" | 2
| 76.8
| rowspan="2" | 256
|13.6
|18.7
| 598.4
| rowspan="2" | 100(MXM module)
|Similar to Desktop GTX465
|-
! style="text-align:left;" | GeForce GTX 485M
| January 5, 2011
| GF104
| 575
| 1150
| 3000
| 384:64<sup>3</sup>:32
| 96.0
|18.4
|36.8
| 883.2
|Similar to Desktop GTX560 Ti
|}
===GeForce 500M (5xxM) series===
The GeForce 500M series for notebooks is based on the [[Fermi (microarchitecture)|Fermi]] microarchitecture.
* <sup>1</sup> [[Unified shader model|Unified shaders]]: [[texture mapping unit]]s: [[render output unit]]s
* <sup>2</sup> On Some Dell XPS17
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
|-
! rowspan=2 | Model
! rowspan=2 | Launch
! rowspan=2 | [[Code name]]
! rowspan=2 | Fab ([[Nanometer|nm]])
! rowspan=2 | [[Computer bus|Bus]] [[I/O interface|interface]]
! colspan="3" | Clock speed
! rowspan="2" | Core config<sup>1</sup>
! colspan=4 | Memory
! colspan="2" |[[Fillrate]]
! rowspan="2" | Processing power ([[GFLOPS]])<sup>2</sup>
! colspan="2" | Supported [[Application programming interface|API]] version
! rowspan=2 | [[Thermal design power|TDP]] (Watts)
! rowspan=2 | Notes
|-
! Core ([[Hertz|MHz]])
! Shader ([[Hertz|MHz]])
! Memory ([[Hertz|MHz]])
! Size ([[Gibibyte|GiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width ([[bit]])
!Pixel ([[Pixel|GP]]/s)
!Texture ([[Texel (graphics)|GT]]/s)
! [[Direct3D]]
! [[OpenGL]]
|-
! style="text-align:left;" | GeForce GT 520M
| January 5, 2011
| GF119
| rowspan="10" | 40
| rowspan="10" | PCIe 2.0 x16
| 740
| 1480
| rowspan="2" | 1600
| 48:8:4
| rowspan="4" | 1
| rowspan="2" | 12.8
| rowspan="6" | DDR3
| rowspan="3" | 64
|2.96
|5.92
| 142.08
| rowspan="10" | 12
| rowspan="10" | 4.6
| 12
|Similar to Desktop 510/520
|-
! style="text-align:left;" | GeForce GT 520M
|
| GF108
| 515
| 1030
| 96:16:4
|2.06
|8.24
| 197.76
| rowspan="2" | 20
| Found in [[Lenovo]] laptops; similar to Desktop 530/430/440
|-
! style="text-align:left;" | GeForce GT 520MX
| May 30, 2011
| GF119
| 900
| 1800
| rowspan="3" | 1800
| 48:8:4
| 14.4
|3.6
|7.2
| 172.8
|Similar to Desktop 510 & GT520
|-
! style="text-align:left;" | GeForce GT 525M
| rowspan="4" | January 5, 2011
| rowspan="2" | GF108
| 600
| 1200
| rowspan="2" | 96:16:4
| rowspan="3" | 28.8
| rowspan="3" | 128
|2.4
|9.6
| 230.4
| 20–23
|Similar to Desktop GT 530/430/440
|-
! style="text-align:left;" | GeForce GT 540M
| 672
| 1344
| 2<br />1
|2.688
|10.752
| 258.048
| rowspan="2" | 32–35
| rowspan="2" |Similar to Desktop GT 530/440
|-
! style="text-align:left;" | GeForce GT 550M
| GF108<br />GF106<sup>2</sup>
| 740<br />475
| 1480<br />950
| 1800<br />1800
| 96:16:4<br />144:24:16<sup>2</sup>
| 1
|2.96
|11.84
| 284.16<br />312.6
|-
! style="text-align:left;" | GeForce GT 555M
| GF106<br /><br />GF108
| 590<br />650<br />753
| 1180<br />1300<br />1506
| 1800<br />1800<br />3138
| 144:24:24<br />144:24:16<br />96:16:4
| 1.5<br />2<br />1
| 43.2<br />28.8<br />50.2
| DDR3<br />DDR3<br />GDDR5
| 192<br />128<br />128
|14.6<br />10.4<br />3
|14.6<br />15.6<br />12
| 339.84<br />374.4<br />289.15
| 30–35
|Similar to Desktop GT545
|-
! style="text-align:left;" | GeForce GTX 560M
| May 30, 2011
| GF116
| 775
| 1550
| 2500
| 192:32:16<br />192:32:24
| 2<br />1.5, 3
| 40.0<br />60.0
| rowspan="3" | GDDR5
| 128<br />192
|18.6
|24.8
| 595.2
| rowspan="2" | 75
|Similar to Desktop GTX 550Ti
|-
! style="text-align:left;" | GeForce GTX 570M<ref>{{cite web |url=http://www.geforce.com/#/Hardware/GPUs/geforce-gtx-570m/specifications |title=Graphics Cards, Gaming, Laptops, and Virtual Reality from NVIDIA GeForce |access-date=2011-07-01 |archive-url=https://web.archive.org/web/20110629034841/http://www.geforce.com/#/Hardware/GPUs/geforce-gtx-570m/specifications |archive-date=2011-06-29 |url-status=live }}</ref>
| rowspan="2" | June 28, 2011
| rowspan="2" | GF114
| 575
| 1150
| rowspan="2" | 3000
| 336:56:24
| 1.5
| 72.0
| 192
|13.8
|32.2
| 772.8
|Similar to Desktop GTX 560
|-
! style="text-align:left;" | GeForce GTX 580M
| 620
| 1240
| 384:64:32
| 2
| 96.0
| 256
|
|
| 952.3
| 100
|Similar to Desktop GTX 560 Ti
|}
===GeForce 600M (6xxM) series===
{{Further|GeForce 600 series}}
The GeForce 600M series for notebooks is based on the [[Fermi (microarchitecture)|Fermi]] and [[Kepler (microarchitecture)|Kepler]] microarchitectures. The processing power is obtained by multiplying the shader clock speed, the number of cores, and the number of instructions each core can perform per cycle; a worked example is given after the notes below.
* <sup>1</sup> [[Unified shader model|Unified shaders]]: [[texture mapping unit]]s: [[render output unit]]s
* Non-GTX models lack [[NVENC]] support
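For example, applying this to the GeForce GTX 680M in the table below (1344 Kepler shader cores at 719 MHz, two floating-point operations per core per clock):
: <math>1344 \times 719\,\text{MHz} \times 2 \approx 1933\ \text{GFLOPS}</math>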
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
|-
! rowspan=2 | Model
! rowspan=2 | Launch
! rowspan=2 | [[Code name]]
! rowspan=2 | Fab ([[Nanometer|nm]])
! rowspan=2 | [[Computer bus|Bus]] [[I/O interface|interface]]
! colspan="3" | Clock speed
! rowspan="2" | Core config<sup>1</sup>
! colspan=4 | Memory
! colspan="2" |[[Fillrate]]
! rowspan="2" | Processing power ([[GFLOPS]])<sup>2</sup>
! colspan="3" | Supported [[Application programming interface|API]] version
! rowspan=2 | [[Thermal design power|TDP]] (Watts)
! rowspan=2 | Notes
|-
! Core ([[Hertz|MHz]])
! Shader ([[Hertz|MHz]])
! Memory ([[Transfers per second|GT/s]])
! Size ([[Gibibyte|GiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width ([[bit]])
!Pixel ([[Pixel|GP]]/s)
!Texture ([[Texel (graphics)|GT]]/s)
! [[Vulkan]]
! [[Direct3D]]
! [[OpenGL]]
|-
! style="text-align:left;" | GeForce 610M<ref>{{cite web |url=http://www.nvidia.in/object/geforce-610m-in.html#pdpContent=2 |title=GeForce 610M Graphics Card with Optimus technology {{pipe}} Nvidia |website=Nvidia.in |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20151208092726/http://www.nvidia.in/object/geforce-610m-in.html#pdpContent=2 |archive-date=2015-12-08 |url-status=live }}</ref>
| December 2011
| GF119 (N13M-GE)
| 40
| rowspan="5" | PCIe 2.0 x16
| 900
| 1800
| 1.8
| 48:8:4
| rowspan="4" | 1<br />2
| 14.4
| rowspan="3" | DDR3
| 64
|3.6
|7.2
| 142.08
| rowspan="5" | n/a
| rowspan="16" | 12
| rowspan="16" | 4.5
| 12
| OEM. Rebadged GT 520MX
|-
! style="text-align:left;" | GeForce GT 620M<ref name="ReferenceA">{{cite web |url=http://www.anandtech.com/show/5697/nvidias-geforce-600m-Series-keplers-and-fermis-and-die-shrinks-oh-my/2 |title=Nvidia's GeForce 600M series: Mobile Kepler and Fermi Die Shrinks |website=Anandtech.com |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20150919060349/http://www.anandtech.com/show/5697/nvidias-geforce-600m-series-keplers-and-fermis-and-die-shrinks-oh-my/2 |archive-date=2015-09-19 |url-status=dead }}</ref>
| April 2012
| rowspan="2" | GF117 (N13M-GS)
| rowspan="2" | 28
| rowspan="2" | 625
| rowspan="2" | 1250
| rowspan="2" | 1.8
| rowspan="3" | 96:16:4
| 14.4<br />28.8
| 64<br />128
| rowspan="2" |2.5
| rowspan="2" |10
| 240
| rowspan="2" | 15
| rowspan="2" | OEM. Die-Shrink GF108
|-
! style="text-align:left;" | GeForce GT 625M
| October 2012
| 14.4
| 64
|
|-
! style="text-align:left;" | GeForce GT 630M<ref name="ReferenceA"/><ref>{{cite web |url=http://www.nvidia.in/object/geforce-gt-630m-in.html#pdpContent=2 |title=GeForce GT 630M Graphics Card with Optimus technology {{pipe}} Nvidia |website=Nvidia.in |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20151204075305/http://www.nvidia.in/object/geforce-gt-630m-in.html#pdpContent=2 |archive-date=2015-12-04 |url-status=live }}</ref><ref>{{cite web |url=http://www.geforce.com/hardware/notebook-gpus/geforce-gt-630m/specifications |title=GT 630M {{pipe}} Specifications |publisher=GeForce |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20151209063648/http://www.geforce.com/hardware/notebook-gpus/geforce-gt-630m/specifications |archive-date=2015-12-09 |url-status=live }}</ref>
| rowspan="2" | April 2012
| GF108 (N13P-GL)<br />GF117
| 40<br />28
| 660<br />800
| 1320<br />1600
| 1.8<br />4
| 28.8<br />32.0
| DDR3<br />GDDR5
| 128<br />64
|2.6<br />3.2
|10.7<br />12.8
| 258.0<br />307.2
| 33
| GF108: OEM. Rebadged GT 540M<br />GF117: OEM Die-Shrink GF108
|-
! style="text-align:left;" | GeForce GT 635M<ref name="ReferenceA"/><ref>{{cite web |url=http://www.nvidia.in/object/geforce-gt-635m-in.html#pdpContent=2 |title=GeForce GT 635M GPU with Nvidia Optimus technology {{pipe}} Nvidia |website=Nvidia.in |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20151222133158/http://www.nvidia.in/object/geforce-gt-635m-in.html#pdpContent=2 |archive-date=2015-12-22 |url-status=live }}</ref><ref>{{cite web |url=http://www.geforce.com/hardware/notebook-gpus/geforce-gt-635m/specifications |title=GT 635M {{pipe}} Specifications |publisher=GeForce |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20151219023301/http://www.geforce.com/hardware/notebook-gpus/geforce-gt-635m/specifications |archive-date=2015-12-19 |url-status=live }}</ref>
| GF106 (N12E-GE2)<br />GF116
| 40
| 675
| 1350
| 1.8
| 144:24:24
| 2<br />1.5
| 28.8<br />43.2
| DDR3
| 128<br />192
|16.2
|16.2
| 289.2<br />388.8
| 35
| GF106: OEM. Rebadged GT 555M<br />GF116: 94% of desktop GT640{{Original research inline|date=June 2015}}
|-
! style="text-align:left;" | GeForce GT 640M LE<ref name="ReferenceA"/>
| rowspan="2" | March 22, 2012
| GF108<br />GK107 (N13P-LP)
| 40<br />28
| PCIe 2.0 x16<br />PCIe 3.0 x16
| 762<br />500
| 1524<br />500
| 3.13<br />1.8
| 96:16:4<br /><br />384:32:16<br />(2 SMX)
| rowspan="3" | 1<br />2
| 50.2<br />28.8
| rowspan="4" | DDR3
GDDR5
| rowspan="5" | 128
|3<br />8
|12.2<br />16
| 292.6<br />384
| rowspan="5" | 1.2
| 32<br />20
| GF108: 94% of desktop GT630{{Original research inline|date=June 2015}}<br />GK107: 47% of desktop GTX650{{Original research inline|date=June 2015}}
|-
! style="text-align:left;" | GeForce GT 640M<ref name="ReferenceA"/><ref>{{cite web |url=http://www.anandtech.com/show/5672/acer-aspire-timelineu-m3-life-on-the-kepler-verge |title=Acer Aspire TimelineU M3: Life on the Kepler Verge |website=Anandtech.com |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20151222112445/http://www.anandtech.com/show/5672/acer-aspire-timelineu-m3-life-on-the-kepler-verge |archive-date=2015-12-22 |url-status=dead }}</ref>
| rowspan="2" | GK107 (N13P-GS)
| rowspan="4" | 28
| rowspan="4" | PCIe 3.0 x16
| 625
| 625
| 1.8<br />4
| rowspan="4" | 384:32:16<br />(2 SMX)
| rowspan="2" | 28.8<br />64.0
|10
|20
| 480
| rowspan="2" | 32
| 59% of desktop GTX650{{Original research inline|date=June 2015}}
|-
! style="text-align:left;" | GeForce GT 645M
| October 2012
| 710
| 710
| 1.8<br />4
|11.36
|22.72
| 545
| 67% of desktop GTX650{{Original research inline|date=June 2015}}
|-
! style="text-align:left;" | GeForce GT 650M<ref name="ReferenceA"/><ref>{{cite web |author=RPL |url=http://www.laptopreviews.com/hp-lists-new-ivy-bridge-2012-mosaic-design-laptops-available-april-8th-2012-03 |title=HP Lists New Ivy Bridge 2012 Mosaic Design Laptops, Available April 8th |publisher=Laptop Reviews |date=2012-03-18 |access-date=2015-12-11 |url-status=dead |archive-url=https://web.archive.org/web/20130523012343/http://www.laptopreviews.com/hp-lists-new-ivy-bridge-2012-mosaic-design-laptops-available-april-8th-2012-03 |archive-date=May 23, 2013 }}</ref><ref name="ReferenceB">{{cite web |url=http://content.dell.com/us/en/home/d/help-me-choose/hmc-aw-video-card-laptops |title=Help Me Choose: Video Cards {{pipe}} Dell |website=Content.dell.com |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20121102092044/http://content.dell.com/us/en/home/d/help-me-choose/hmc-aw-video-card-laptops |archive-date=2012-11-02 |url-status=live }}</ref>
| rowspan="2" | March 22, 2012
| GK107 (N13P-GT)
| 745<br />835<br />900*
| 835<br />950<br />900*
| 1.8<br />4<br />5*
| 0.5<br />1<br />2
| 28.8<br />64.0<br />80.0*
|11.9<br />13.4<br />14.4*
|23.8<br />26.7<br />28.8*
| 729.6<br />641.3<br />691.2*
| 45
| 79% of desktop GTX650{{Original research inline|date=June 2015}}
|-
! style="text-align:left;" | GeForce GTX 660M<ref name="ReferenceA"/><ref name="ReferenceB"/><ref>{{cite web |url=https://www.engadget.com/2012/01/08/lenovo-ideapad-laptops-CES-2012/ |title=Lenovo unveils six mainstream consumer laptops (and one desktop replacement) |website=Engadget.com |date=2012-01-08 |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20151222110754/http://www.engadget.com/2012/01/08/lenovo-ideapad-laptops-CES-2012/ |archive-date=2015-12-22 |url-status=live }}</ref><ref>{{cite web |url=http://forum.notebookreview.com/asus-reviews-owners-lounges/659534-asus-g75vw-ivy-bridge-660m-review-owners-lounge-4.html |title=Asus G75VW Ivy Bridge 660M Review! and owners lounge {{pipe}} Page 4 {{pipe}} NotebookReview |website=Forum.notebookreview.com |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20150107131543/http://forum.notebookreview.com/asus-reviews-owners-lounges/659534-asus-g75vw-ivy-bridge-660m-review-owners-lounge-4.html |archive-date=2015-01-07 |url-status=live }}</ref>
| GK107 (N13E-GE)
| 835
| 950
| 5
| 2
| 80.0
| rowspan="7" | GDDR5
|15.2
|30.4
| 729.6
| 50
| 79% of desktop GTX650{{Original research inline|date=June 2015}}
|-
! style="text-align:left;" | GeForce GTX 670M<ref name="ReferenceA"/>
| April 2012
| GF114 (N13E-GS1-LP)
| 40
| PCIe 2.0 x16
| 620
| 1240
| 3
| 336:56:24
| rowspan="2" | 1.5<br />3
| 72.0
| rowspan="2" | 192
|14.35
|33.5
| 833
| n/a
| rowspan="2" | 75
| 73% of desktop GTX 560{{Original research inline|date=June 2015}}
|-
! style="text-align:left;" | GeForce GTX [https://www.geforce.com/hardware/notebook-gpus/geforce-gtx-670mx/specifications 670MX]
| October 2012
| GK104 (N13E-GR)
| 28
| PCIe 3.0 x16
| 615
| 615
| 2.8
| 960:80:24<br />(5 SMX)
| 67.2
|14.4
|48.0
| 1181
| 1.2
| 61% of desktop GTX 660{{Original research inline|date=June 2015}}
|-
! style="text-align:left;" | GeForce GTX 675M<ref name="ReferenceA"/>
| April 2012
| GF114 (N13E-GS1)
| 40
| PCIe 2.0 x16
| 632
| 1265
| 3
| 384:64:32
| 2
| 96.0
| rowspan="4" | 256
|19.8
|39.7
| 972
| n/a
| rowspan="3" | 100
| 75% of desktop GTX 560Ti{{Original research inline|date=June 2015}}
|-
! style="text-align:left;" | GeForce GTX [https://www.geforce.com/hardware/notebook-gpus/geforce-gtx-675mx/specifications 675MX]
| October 2012
| GK104 (N13E-GSR)
| rowspan="3" | 28
| rowspan="3" | PCIe 3.0 x16
| 667
| 667
| rowspan="2" | 3.6
| 960:80:32<br />(5 SMX)
| rowspan="3" | 4
| rowspan="2" | 115.2
|19.2
|48.0
| 1281
| rowspan="3" | 1.2
| 61% of desktop GTX 660{{Original research inline|date=June 2015}}
|-
! style="text-align:left;" | GeForce GTX 680M
| June 4, 2012
| GK104 (N13E-GTX)
| rowspan="2" | 719
| rowspan="2" | 719
| 1344:112:32<br />(7 SMX)
| rowspan="2" |23
|80.6
| 1933
| 78% of desktop GTX 670{{Original research inline|date=June 2015}}
|-
! style="text-align:left;" | GeForce GTX 680MX
| October 23, 2012
| GK104
| 5
| 1536:128:32<br />(8 SMX)
| 160
|92.2
| 2209
| 122
| 72% of desktop GTX 680{{Original research inline|date=June 2015}}
|-
! rowspan=2 | Model
! rowspan=2 | Launch
! rowspan=2 | [[Code name]]
! rowspan=2 | Fab ([[Nanometer|nm]])
! rowspan=2 | [[Computer bus|Bus]] [[I/O interface|interface]]
! colspan="3" | Clock speed
! rowspan="2" | Core config<sup>1</sup>
! colspan=4 | Memory
! colspan="2" |[[Fillrate]]
! rowspan="2" | Processing power ([[GFLOPS]])<sup>2</sup>
! colspan="3" | Supported [[Application programming interface|API]] version
! rowspan=2 | [[Thermal design power|TDP]] (Watts)
! rowspan=2 | Notes
|-
! Core ([[Hertz|MHz]])
! Shader ([[Hertz|MHz]])
! Memory ([[Transfers per second|GT/s]])
! Size ([[Gibibyte|GiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width ([[bit]])
!Pixel ([[Pixel|GP]]/s)
!Texture ([[Texel (graphics)|GT]]/s)
! [[Vulkan]]
! [[Direct3D]]
! [[OpenGL]]
|}
===GeForce 700M (7xxM) series===
{{Further|GeForce 700 series|Kepler (microarchitecture)}}
The GeForce 700M series for notebooks is based on the [[Fermi (microarchitecture)|Fermi]] and [[Kepler (microarchitecture)|Kepler]] microarchitectures. The processing power is obtained by multiplying the shader clock speed, the number of cores, and the number of instructions each core can perform per cycle, as in the worked example after the notes below.
* <sup>1</sup> [[Unified shader model|Unified shaders]]: [[texture mapping unit]]s: [[render output unit]]s
* Non-GTX variants lack [[NVENC]] support
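For example, for the GeForce GTX 780M in the table below (1536 Kepler cores at 797 MHz, two floating-point operations per core per clock):
: <math>1536 \times 797\,\text{MHz} \times 2 \approx 2448\ \text{GFLOPS}</math>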
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
|-
! rowspan=2 | Model
! rowspan=2 | Launch
! rowspan=2 | [[Code name]]
! rowspan=2 | Fab ([[Nanometer|nm]])
! rowspan=2 | [[Computer bus|Bus]] [[I/O interface|interface]]
! colspan="3" | Clock speed
! rowspan="2" | Core config<sup>1</sup>
! colspan=4 | Memory
! colspan="2" |[[Fillrate]]
! rowspan="2" | Processing power ([[GFLOPS]])<sup>2</sup>
! colspan="4" | Supported [[Application programming interface|API]] version
! rowspan=2 | [[Thermal design power|TDP]] (Watts)
! rowspan=2 | Notes
|-
! Core ([[Hertz|MHz]])
! Shader ([[Hertz|MHz]])
! Memory ([[Transfers per second|GT/s]])
! Size ([[Gibibyte|GiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width ([[bit]])
!Pixel ([[Pixel|GP]]/s)
!Texture ([[Texel (graphics)|GT]]/s)
! [[Vulkan]]
! [[Direct3D]]
! [[OpenGL]]
! [[CUDA]]
|-
! style="text-align:left;" | GeForce 710M
| January 2013
| GF117
| rowspan="15" | 28
| PCIe 2.0 x16
| 800
| 1600
| rowspan="2" | 1.8
| 96:16:4
| 1<br />2
| rowspan="2" | 14.4
| rowspan="7" | DDR3
| rowspan="4" | 64
|3.2
|12.8
| 307.2
| n/a
| rowspan="15" | 12
| rowspan="15" | 4.5
|2.1<ref name="cuda-gpu-cap">{{cite web |url=https://developer.nvidia.com/cuda-gpus |title=CUDA GPUs - Compute Capacity {{pipe}} NVIDIA Developer |date=4 June 2012 |publisher=Nvidia |access-date=2023-04-11 |archive-date=7 June 2023 |archive-url=https://web.archive.org/web/20230607011105/https://developer.nvidia.com/cuda-gpus |url-status=live }}</ref>
| 12
| OEM. About 115% of Mobile 620 & Desktop 530{{Original research inline|date=June 2015}}
|-
! style="text-align:left;" | GeForce 710M
| July 24, 2013
| GK208
| PCIe 3.0 x8
| 719
| ?
| 192:16:8
| 1
|5.752
|11.5
| 276.1
| 1.2
|3.5<ref name="techpowerup-gpu-gk208">{{cite web |url=https://www.techpowerup.com/gpu-specs/nvidia-gk208.g572 |title=NVIDIA GK208 GPU Specs |publisher=TechPowerUP |access-date=2023-04-11 }}</ref>
| 15
|Kepler, similar to 730M with half of the cores disabled
|-
! style="text-align:left;" | GeForce GT 720M
| April 1, 2013
| GF117
| PCIe 2.0 x16
| 938
| 1876
| rowspan="4" | 2
| 96:16:4
| rowspan="5" | 2
| 16.0
|3.8
|15.0
| 360.19
| n/a
|2.1<ref name="cuda-gpu-cap" />
| ?
| OEM. About 130% of Mobile 625/630 & Desktop 620{{Original research inline|date=June 2015}}
|-
!style="text-align:left;" | GeForce GT 720M
|December 25, 2013
|GK208
|PCIe 2.0 x8
| colspan="2" |719
|192:16:8
|12.8
|3.032
|12.13
|291
| rowspan="12" | 1.2
| rowspan="4" | 3.5<ref name="techpowerup-gpu-gk208" />
|22
|Kepler, similar to 730M with half of the cores disabled
|-
! style="text-align:left;" | GeForce GT 730M
| January 2013
| rowspan="3" | GK208
| rowspan="3" | PCIe 3.0 x8
| colspan="2" | 719
| rowspan="3" | 384:32:8<br />(2 SMX)
| rowspan="2" |16.0
| 128
|5.8
|23.0
| 552.2
| 33
| Kepler, similar to Desktop GT640
|-
! style="text-align:left;" | GeForce GT 735M
| rowspan="5" | April 1, 2013
| colspan="2" | 889
| rowspan="2" | 64
|7.11
|28.4
| 682.8
| rowspan="2" | ?
| Kepler, similar to Desktop GT640
|-
! style="text-align:left;" | GeForce GT 740M
| colspan="2" | 980
| 1.8
| 14.4
|7.84
|31.4
| 752.6
| Kepler, similar to Desktop GT640.
|-
! style="text-align:left;" | GeForce GT 740M
| rowspan="4" | GK107
| rowspan="8" | PCIe 3.0 x16
| colspan="2" | 810<ref name="740M Notebookcheck" />
| 1.8<br />5
| rowspan="4" | 384:32:16<br />(2 SMX)
| 2<ref name="740M Notebookcheck">{{cite web |url=http://www.notebookcheck.net/Nvidia-GeForce-GT-740M.89900.0.html |title=Nvidia GeForce GT 740M - NotebookCheck.net Tech |website=Notebookcheck.net |date=2013-03-17 |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20151217072323/http://www.notebookcheck.net/NVIDIA-GeForce-GT-740M.89900.0.html |archive-date=2015-12-17 |url-status=live }}</ref>
| 28.8<br />80
| DDR3<br />GDDR5<ref name="740M Notebookcheck" />
| rowspan="6" | 128
|12.96
|25.92
| 622.1
| rowspan="8" | 3.0<ref name="cuda-gpu-cap" />
| rowspan="2" | 45
| about 76% of Desktop GTX650{{Original research inline|date=June 2015}}
|-
! style="text-align:left;" | GeForce GT 745M
| colspan="2" | 837
| rowspan="2" | 2<br />5
| rowspan="5" | 2
| rowspan="2" | 32<br />80
| rowspan="2" | DDR3<br />GDDR5
|13.4
|26.8
| 642.8
| about 79% of Desktop GTX650{{Original research inline|date=June 2015}}
|-
! style="text-align:left;" | GeForce GT 750M
| colspan="2" | 967
|15.5
|30.9
| 742.7
| rowspan="2" | 50
| about 91% of Desktop GTX650{{Original research inline|date=June 2015}}
|-
! style="text-align:left;" | GeForce GT 755M<ref>{{cite web |url=http://www.geforce.com/hardware/notebook-gpus/geforce-gt-755m/specifications |title=GT 755M {{pipe}} Specifications |publisher=GeForce |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20151208161237/http://www.geforce.com/hardware/notebook-gpus/geforce-gt-755m/specifications |archive-date=2015-12-08 |url-status=live }}</ref>
| ?
| colspan="2" | 1020
| 5.4
| 86.4
| rowspan="5" | GDDR5
|15.7
|31.4
| 783
| about 93% of Desktop GTX650{{Original research inline|date=June 2015}}
|-
! style="text-align:left;" | GeForce GTX 760M
| rowspan="4" | May 2013
| rowspan="3" | GK106
| colspan="2" | 719
| rowspan="3" | 4
| rowspan="2" | 768:64:16<br />(4 SMX)
| rowspan="2" | 64
|10.5
|42.1
| 1104
| 55
| about 71% of Desktop GTX 650Ti{{Original research inline|date=June 2015}}
|-
! style="text-align:left;" | GeForce GTX 765M
| colspan="2" | 863
|13.6
|54.4
| 1326
| 65
| about 92% of Desktop GTX 650Ti{{Original research inline|date=June 2015}}
|-
! style="text-align:left;" | GeForce GTX 770M
| colspan="2" rowspan="2" | 797
| 960:80:24<br />(5 SMX)
| 3
| 96
| 192
|19.5
|64.9
| 1530
| 75
| about 83% of Desktop GTX660{{Original research inline|date=June 2015}}
|-
! style="text-align:left;" | GeForce GTX 780M
| GK104
| 5
| 1536:128:32<br />(8 SMX)
| 4
| 160
| 256
|26.3
|105.3
| 2448
| 122
| about 78% of Desktop GTX770{{Original research inline|date=June 2015}}
|-
! rowspan=2 | Model
! rowspan=2 | Launch
! rowspan=2 | [[Code name]]
! rowspan=2 | Fab ([[Nanometer|nm]])
! rowspan=2 | [[Computer bus|Bus]] [[I/O interface|interface]]
! colspan="3" | Clock speed
! rowspan="2" | Core config<sup>1</sup>
! colspan=4 | Memory
! colspan="2" |[[Fillrate]]
! rowspan="2" | Processing power ([[GFLOPS]])<sup>2</sup>
! colspan="4" | Supported [[Application programming interface|API]] version
! rowspan=2 | [[Thermal design power|TDP]] (Watts)
! rowspan=2 | Notes
|-
! Core ([[Hertz|MHz]])
! Shader ([[Hertz|MHz]])
! Memory ([[Transfers per second|GT/s]])
! Size ([[Gibibyte|GiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width ([[bit]])
!Pixel ([[Pixel|GP]]/s)
!Texture ([[Texel (graphics)|GT]]/s)
! [[Vulkan]]
! [[Direct3D]]
! [[OpenGL]]
! [[CUDA]]
|}
===GeForce 800M (8xxM) series===
{{Further|GeForce 800M series}}
The GeForce 800M series for notebooks is based on the [[Fermi (microarchitecture)|Fermi]], [[Kepler (microarchitecture)|Kepler]] and [[Maxwell (microarchitecture)|Maxwell]] microarchitectures. The processing power is obtained by multiplying the shader clock speed, the number of cores, and the number of instructions each core can perform per cycle, as in the worked example after the notes below.
* <sup>1</sup> [[Unified shader model|Unified shaders]]: [[texture mapping unit]]s: [[render output unit]]s
* Models from the 810M through the 845M lack [[NVENC]] support
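For example, for the GDDR5 version of the GeForce GTX 850M in the table below (640 Maxwell cores at the 876 MHz base clock, two floating-point operations per core per clock):
: <math>640 \times 876\,\text{MHz} \times 2 \approx 1121\ \text{GFLOPS}</math>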
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
|-
! rowspan=2 | Model
! rowspan=2 | Launch
! rowspan=2 | [[Code name]]
! rowspan=2 | Fab ([[Nanometer|nm]])
! rowspan=2 | [[Computer bus|Bus]] [[I/O interface|interface]]
! colspan="3" | Clock speed
! rowspan="2" | Core config<sup>1</sup>
! colspan=4 | Memory
! colspan="2" |[[Fillrate]]
! rowspan="2" | Processing power ([[GFLOPS]])<sup>2</sup>
! colspan="4" | Supported [[Application programming interface|API]] version
! rowspan=2 | [[Thermal design power|TDP]] (Watts)
! rowspan=2 | Notes (original research)
|-
! Core ([[Hertz|MHz]])
! [[Shader]] ([[Hertz|MHz]])
! Memory ([[Transfers per second|GT/s]])
! Size ([[Gibibyte|GiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width ([[bit]])
!Pixel ([[Pixel|GP]]/s)
!Texture ([[Texel (graphics)|GT]]/s)
! [[Vulkan]]
! [[Direct3D]]
! [[OpenGL]]
! [[CUDA]] Compute Capability
|-
! style="text-align:left;" | GeForce 810M
| rowspan="2" | February 2014
| rowspan="2" | GF117
| rowspan="13" | 28
| rowspan="2" | PCIe 2.0 x16
| 738–888
| 1476–1776
| 1.8
| 48:8:4
| 1
| 14.4
| rowspan="5" | DDR3
| rowspan="7" | 64
|2.95–3.55
|5.9–7.1
| 141.7–170.5
| rowspan="2" | n/a
| rowspan="13" | 12
| rowspan="13" | 4.5
| rowspan="2" | 2.1<ref name="cuda-gpu-cap" />
| 15
|
|-
! style="text-align:left;" | GeForce 820M<ref>{{cite web |url=http://www.geforce.com/hardware/notebook-gpus/geforce-820m/specifications |title=820M {{pipe}} Specifications |publisher=GeForce |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20151219131829/http://www.geforce.com/hardware/notebook-gpus/geforce-820m/specifications |archive-date=2015-12-19 |url-status=live }}</ref>
| 719–954
| 1438–1908
| 2
| 96:16:4
| rowspan="3" | 2
| 16
|2.9–3.8
|11.5–15.3
| 276.1–366.3
| 15<ref>{{cite web |url=http://www.techpowerup.com/gpudb/2524/geforce-820m.html |title=Nvidia GeForce 820M {{pipe}} techPowerUp GPU Database |website=Techpowerup.com |access-date=2015-12-11 |archive-url=https://archive.today/20141230083634/http://www.techpowerup.com/gpudb/2524/geforce-820m.html |archive-date=2014-12-30 |url-status=live }}</ref>
| 115% of 620 (Fermi)
|-
! style="text-align:left;" | GeForce 825M<ref>{{cite web |url=http://www.notebookcheck.net/NVIDIA-GeForce-825M.110063.0.html |title=Nvidia GeForce 825M - NotebookCheck.net Tech |website=Notebookcheck.net |date=2014-03-13 |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20151126061735/http://www.notebookcheck.net/NVIDIA-GeForce-825M.110063.0.html |archive-date=2015-11-26 |url-status=live }}</ref>
| January 27, 2014
| GK208
| PCIe 3.0 x8
| colspan="2" | 850
| rowspan="2" | 1.8
| 384:16:8<br />(2 SMX)
| 14.4
|6.8
|13.6
| 652.8
| 1.2
| 3.5<ref name="techpowerup-gpu-gk208" />
| 33
| 94% of 630 (Kepler)
|-
! style="text-align:left;" | GeForce 830M<ref>{{cite web |url=http://www.geforce.com/hardware/notebook-gpus/geforce-830m/specifications |title=830M {{pipe}} Specifications |publisher=GeForce |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20151210060525/http://www.geforce.com/hardware/notebook-gpus/geforce-830m/specifications |archive-date=2015-12-10 |url-status=live }}</ref>
| rowspan="2" | March 12, 2014
| rowspan="3" | GM108
| rowspan="10" | PCIe 3.0 x16
| colspan="2" rowspan="2" | 1029
| 256:16:8<br />(2 SMM)
| 14.4
|8.2
|16.5
| 526.8
| rowspan="7" | 1.3
| rowspan="7" | 5.0<ref name="cuda-gpu-cap" />
| ~25
|50% of 750 (Maxwell)
|-
! style="text-align:left;" | GeForce 840M<ref>{{cite web |url=http://www.geforce.com/hardware/notebook-gpus/geforce-840m/specifications |title=840M {{pipe}} Specifications |publisher=GeForce |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20151217191234/http://www.geforce.com/hardware/notebook-gpus/geforce-840m/specifications |archive-date=2015-12-17 |url-status=live }}</ref>
| 2
| 384:24:8<br />(3 SMM)
| 2–4
| 16
|8.2
|24.7
| 790.3
| 30
| 50–80% of 745 (Maxwell)
|-
! rowspan="2" style="text-align:left;" | GeForce 845M<ref>{{cite web |url=http://www.asus.com/Notebooks_Ultrabooks/N551JQ/ |title=N551JQ {{pipe}} Notebooks {{pipe}} ASUS Global |website=Asus.com |date=2012-05-29 |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20150319083722/http://www.asus.com/Notebooks_Ultrabooks/N551JQ/ |archive-date=2015-03-19 |url-status=live }}</ref><ref>{{cite web|url=https://www.notebookcheck.net/Dell-Inspiron-17-7746-2015-Notebook-Review.144300.0.html|title=Dell Inspiron 17 7746 (2015) Notebook Review|last=Jentsch|first=Sebastian|website=Notebookcheck|date=9 June 2015 |language=en|access-date=2019-03-19|archive-url=https://web.archive.org/web/20170816125140/https://www.notebookcheck.net/Dell-Inspiron-17-7746-2015-Notebook-Review.144300.0.html|archive-date=2017-08-16|url-status=live}}</ref>
| February 7, 2015<ref>{{cite web|url=https://www.techpowerup.com/gpu-specs/geforce-845m.c2660|title=NVIDIA GeForce 845M Specs|website=TechPowerUp|language=en|access-date=2019-03-19}}</ref>
| colspan="2" | 1071–1150
| 5
| 384:32:16<br />(3 SMM)
| rowspan="2" | 2
| 40
| GDDR5
|18.8
|37.6
| 903.2
| 33
|
|-
|August 16, 2015<ref>{{cite web|url=https://www.techpowerup.com/gpu-specs/geforce-845m.c2753|title=NVIDIA GeForce 845M Specs|website=TechPowerUp|language=en|access-date=2019-03-19}}</ref>
| rowspan="4" |GM107
| colspan="2" |863
|2
|512:32:16<br />(4 SMM)
|16
|DDR3
|13.8
|27.6
|883.7
|45
|
|-
! rowspan="2" style="text-align:left;" | GeForce GTX 850M<ref>{{cite web |url=http://www.geforce.com/hardware/notebook-gpus/geforce-gtx-850m/specifications |title=GTX 850M {{pipe}} Specifications |publisher=GeForce |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20151206080014/http://www.geforce.com/hardware/notebook-gpus/geforce-gtx-850m/specifications |archive-date=2015-12-06 |url-status=live }}</ref>
| rowspan="6" | March 12, 2014
| colspan="2" | 876+Boost
| 5
| rowspan="2" | 640:40:16<br />(5 SMM)
| rowspan="2" | 2–4
| 80.2
| GDDR5
| rowspan="4" | 128
|14.0
|35.0
| 1121.3
| rowspan="2" | 40
| 80% of 750Ti
|-
| colspan="2" | 936+Boost
| 2
| 32
| DDR3
|15.0
|37.4
| 1198.1
| 85% of 750Ti
|-
! rowspan=2 style="text-align:left;" | GeForce GTX 860M<ref>{{cite web |url=http://www.geforce.com/hardware/notebook-gpus/geforce-gtx-860m/specifications |title=GTX 860M {{pipe}} Specifications |publisher=GeForce |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20151204182454/http://www.geforce.com/hardware/notebook-gpus/geforce-gtx-860m/specifications |archive-date=2015-12-04 |url-status=live }}</ref>
| colspan="2" | 1029–1085
| rowspan="4" | 5
| 640:40:16<br />(5 SMM)
| 2
| rowspan=2 | 80
| rowspan="4" | GDDR5
|16.5
|41.2
| 1389
| 40–45
| equal to 750Ti
|-
| rowspan="3" | GK104
| colspan="2" | 797–915
| 1152:96:16<br />(6 SMX)
| 4
|12.8
|76.5
| 2108
| rowspan="3" | 1.2
| rowspan="3" | 3.0<ref name="cuda-gpu-cap" />
| 75
| similar to 660 OEM.
|-
! style="text-align:left;" | GeForce GTX 870M<ref>{{cite web |url=http://www.geforce.com/hardware/notebook-gpus/geforce-gtx-870m/specifications |title=GTX 870M {{pipe}} Specifications |publisher=GeForce |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20151219023316/http://www.geforce.com/hardware/notebook-gpus/geforce-gtx-870m/specifications |archive-date=2015-12-19 |url-status=live }}</ref>
| colspan="2" | 941–967
| 1344:112:24<br />(7 SMX)
| 3, 6
| 120
| 192
|22.6
|105.4
| 2599
| 110
| 105% of 660Ti
|-
! style="text-align:left;" | GeForce GTX 880M<ref>{{cite web |url=http://www.geforce.com/hardware/notebook-gpus/geforce-gtx-880m/specifications |title=GTX 880M {{pipe}} Specifications |publisher=GeForce |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20151210054953/http://www.geforce.com/hardware/notebook-gpus/geforce-gtx-880m/specifications |archive-date=2015-12-10 |url-status=live }}</ref>
| colspan="2" | 954–993
| 1536:128:32<br />(8 SMX)
| 4, 8
| 160
| 256
|30.5
|122.1
| 3050
|
| 90% of 770
|-
! rowspan=2 | Model
! rowspan=2 | Launch
! rowspan=2 | [[Code name]]
! rowspan=2 | Fab ([[Nanometer|nm]])
! rowspan=2 | [[Computer bus|Bus]] [[I/O interface|interface]]
! colspan="3" | Clock speed
! rowspan="2" | Core config<sup>1</sup>
! colspan=4 | Memory
! colspan="2" |[[Fillrate]]
! rowspan="2" | Processing power ([[GFLOPS]])<sup>2</sup>
! colspan="4" | Supported [[Application programming interface|API]] version
! rowspan=2 | [[Thermal design power|TDP]] (Watts)
! rowspan=2 | Notes (original research)
|-
! Core ([[Hertz|MHz]])
! [[Shader]] ([[Hertz|MHz]])
! Memory ([[Transfers per second|GT/s]])
! Size ([[Gibibyte|GiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width ([[bit]])
!Pixel ([[Pixel|GP]]/s)
!Texture ([[Texel (graphics)|GT]]/s)
! [[Vulkan]]
! [[Direct3D]]
! [[OpenGL]]
! [[CUDA]] Compute Capability
|}
===GeForce 900M (9xxM) series===
{{Further|GeForce 900 series}}
The GeForce 900M series for notebooks comprises a mix of Kepler- and Maxwell-based GPUs.
* <sup>1</sup> [[Unified shader model|Unified shaders]]: [[texture mapping unit]]s: [[render output unit]]s
* <sup>2</sup> Processing power is obtained by multiplying the shader clock speed, the number of cores, and the number of instructions each core can perform per cycle.
*920M to 940M lack [[NVENC]] support
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
|-
! rowspan=2 | Model
! rowspan=2 | Launch
! rowspan=2 | [[Code name]]
! rowspan=2 | Fab ([[Nanometer|nm]])
! rowspan=2 | [[Computer bus|Bus]] [[I/O interface|interface]]
! colspan="3" | Clock speed
! rowspan="2" | Core config<sup>1</sup>
! colspan=4 | Memory
! colspan="2" |[[Fillrate]]
! rowspan="2" | Processing power ([[GFLOPS]])<sup>2</sup>
! colspan="4" | Supported [[Application programming interface|API]] version
! rowspan=2 | [[Thermal design power|TDP]] (Watts)
! rowspan=2 | Notes
|-
! Min ([[Hertz|MHz]])
! Average ([[Hertz|MHz]])
! Memory ([[Transfers per second|GT/s]])
! Size ([[Gibibyte|GiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width ([[bit]])
!Pixel ([[Pixel|GP]]/s)
!Texture ([[Texel (graphics)|GT]]/s)
! [[Vulkan]]
! [[Direct3D]]
! [[OpenGL]]
! [[OpenCL]]
|-
!GeForce 910M
|March 13, 2015
|GK208B
|28
|PCIe 3.0 x8
|641
|
|
|384:32:8<br />(2 SMX)
|2
|16.02
|DDR3
|64
|5.128
|20.51
|492.3
|1.2
|12 (11_0)
|4.6
|1.2
|33
|
|-
! style="text-align:left;" | GeForce 920M
| rowspan="3" | March 12, 2015
| GK208
| rowspan="12" | 28
| rowspan="12" | PCIe 3.0 x16
| 954
| {{unk}}
| rowspan="2" | 1.8
| 384:32:8<br />(2 SMX)
| rowspan="5" | 2
| rowspan="2" | 14.4
| rowspan="4" | DDR3
| rowspan="5" | 64
|7.6
|30.5
| 733
| 1.2
| rowspan="8" | 12 (11_0)
| rowspan="12" | 4.6
| rowspan="12" | 1.2
| 33
| rowspan="5" |
|-
! style="text-align:left;" | GeForce 930M
| rowspan="3" | GM108
| 928
| 941
| rowspan="3" | 384:24:8<br />(3 SMM)
|7.4
|22.3
| 713
| rowspan="11" | 1.3
| 29<ref>{{cite web |url=http://www.aetina.com.tw/lang/en/products/embeddedgraphicscard/mxm%20/m3n930m-en/ |archive-url=https://web.archive.org/web/20150703062135/http://www.aetina.com.tw/lang/en/products/embeddedgraphicscard/mxm%20/m3n930m-en/ |url-status=dead |archive-date=2015-07-03 |title=M3N930M-EN |publisher=Aetina |access-date=2015-12-11 }}</ref>
|-
! style="text-align:left;" | GeForce 940M
| 1072
| 1176
| rowspan="2" | 2
| rowspan="2" | 16
|8.6
|25.7
| 823
| 36<ref>{{cite web |url=http://www.aetina.com.tw/lang/en/products/embeddedgraphicscard/mxm%20/m3n940m-fn/ |archive-url=https://web.archive.org/web/20150703051250/http://www.aetina.com.tw/lang/en/products/embeddedgraphicscard/mxm%20/m3n940m-fn/ |url-status=dead |archive-date=2015-07-03 |title=M3N940M-FN |publisher=Aetina |access-date=2015-12-11 }}</ref>
|-
! rowspan="2" style="text-align:left;" | GeForce 940MX
| January, 2016<ref>{{cite web |url=https://www.techpowerup.com/gpu-specs/geforce-940mx.c2797 |title=Nvidia GeForce 940MX Specs |publisher=TechPowerUp |access-date=2021-03-27 }}</ref>
| 1004
| 1242
|9.9
|29.8
| 954
| rowspan="2" | 23
|-
| June 28, 2016<ref>{{cite web |url=https://www.techpowerup.com/gpu-specs/geforce-940mx.c2845 |title=Nvidia GeForce 940MX Specs |publisher=TechPowerUp |access-date=2020-07-22 }}</ref>
| rowspan="4" | GM107
| 795
| 861
| rowspan="2" | 5
| 512:32:8<br />(4 SMM)
| 40
| rowspan="2" | GDDR5
|6.9
|27.6
| 882
|-
! rowspan="2" style="text-align:left;" | GeForce GTX 950M
| rowspan="3" | March 12, 2015
| rowspan="2" | 914
| rowspan="2" {{unk}}
| rowspan="3" | 640:40:16<br />(5 SMM)
| rowspan="4" | 2, 4
| 80
| rowspan="4" | 128
| rowspan="2" |14.6
| rowspan="2" |36.6
| 1170
| {{unk}}
| rowspan="3" | Similar core config to GTX 750 Ti (GM107-400-A2)
|-
| 2
| 32
| DDR3
|
| 55<ref>{{cite web |url=http://www.aetina.com.tw/lang/en/products/embeddedgraphicscard/mxm/m3n950m-fn/ |archive-url=https://web.archive.org/web/20150316002130/http://www.aetina.com.tw/lang/en/products/embeddedgraphicscard/mxm/m3n950m-fn/ |url-status=dead |archive-date=2015-03-16 |title=M3N950M-FN |publisher=Aetina |access-date=2015-12-11 }}</ref>
|-
! style="text-align:left;" | GeForce GTX 960M
| 1097
| 1176
| rowspan="4" | 5
| rowspan="2" | 80
| rowspan="5" | GDDR5
|17.5
|43.8
| 1403
| 65<ref>{{cite web |url=http://www.aetina.com.tw/lang/en/products/embeddedgraphicscard/mxm/m3n960m-jn/ |archive-url=https://web.archive.org/web/20150315235253/http://www.aetina.com.tw/lang/en/products/embeddedgraphicscard/mxm/m3n960m-jn/ |url-status=dead |archive-date=2015-03-15 |title=M3N960M-JN |publisher=Aetina |access-date=2015-12-11 }}</ref>
|-
! style="text-align:left;" | GeForce GTX 965M<ref>{{cite web |url=http://www.geforce.com/hardware/notebook-gpus/geforce-gtx-965m |title=GTX 965M |publisher=GeForce |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20151208060211/http://www.geforce.com/hardware/notebook-gpus/geforce-gtx-965m |archive-date=2015-12-08 |url-status=live }}</ref>
| January 5, 2015
| rowspan="4" | GM204
| rowspan="2" | 944
| {{unk}}
| 1024:64:32<br />(8 SMM)
|30.2
|60.4
| 1933
| rowspan="4" | 12 (12_1)
| 60<ref>{{cite web |url=http://www.eurocom.com/ec/configure%281,257,0%29ec |title=Eurocom Configure Model |website=Eurocom.com |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20151127053257/http://www.eurocom.com/ec/configure(1,257,0)ec |archive-date=2015-11-27 |url-status=live }}</ref>
| Similar core config to GTX 960 (GM206-300)
|-
! style="text-align:left;" | GeForce GTX 970M<ref>{{cite web |url=http://www.geforce.com/hardware/notebook-gpus/geforce-gtx-970m/specifications |title=GTX 970M {{pipe}} Specifications |publisher=GeForce |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20151209063713/http://www.geforce.com/hardware/notebook-gpus/geforce-gtx-970m/specifications |archive-date=2015-12-09 |url-status=live }}</ref>
| rowspan="2" | October 7, 2014
| 993
| 1280:80:48<br />(10 SMM)
| 3, 6
| 120
| 192
|44.4
|73.9
| 2365
| 75
| Similar core config to GTX 960 OEM (GM204)
|-
! style="text-align:left;" | GeForce GTX 980M<ref>{{cite web |url=http://www.geforce.com/hardware/notebook-gpus/geforce-gtx-980m/specifications |title=GTX 980M {{pipe}} Specifications |publisher=GeForce |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20151213131803/http://www.geforce.com/hardware/notebook-gpus/geforce-gtx-980m/specifications |archive-date=2015-12-13 |url-status=live }}</ref>
| 1038
| 1127
| 1536:96:64<br />(12 SMM)
| 4, 8
| 160
| rowspan="2" | 256
|66.4
|99.6
| 3189
| 100
| Similar core config to GTX 970 (GM204-200) with one SMM disabled
|-
! style="text-align:left;" | GeForce GTX 980<ref>{{cite web |url=http://www.geforce.com/hardware/notebook-gpus/geforce-gtx-980/specifications |title=GTX 980 {{pipe}} Specifications |publisher=GeForce |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20151205174022/http://www.geforce.com/hardware/notebook-gpus/geforce-gtx-980/specifications |archive-date=2015-12-05 |url-status=live }}</ref>
| September 22, 2015
| 1064
| {{unk}}
| 7.01
| 2048:128:64<br />(16 SMM)
| 8
| 224
|68.1
|136.2
| 4358
| 165, oc to 200
| Similar to Desktop GTX 980
|-
! rowspan="2" | Model
! rowspan="2" | Launch
! rowspan="2" | [[Code name]]
! rowspan="2" | Fab ([[Nanometer|nm]])
! rowspan="2" | [[Computer bus|Bus]] [[I/O interface|interface]]
! colspan="3" | Clock speed
! rowspan="2" | Core config<sup>1</sup>
! colspan="4" | Memory
! colspan="2" |[[Fillrate]]
! rowspan="2" | Processing power ([[GFLOPS]])<sup>2</sup>
! colspan="4" | Supported [[Application programming interface|API]] version
! rowspan="2" | [[Thermal design power|TDP]] (Watts)
! rowspan="2" | Notes
|-
! Min ([[Hertz|MHz]])
! Average ([[Hertz|MHz]])
! Memory ([[Transfers per second|GT/s]])
! Size ([[Gibibyte|GiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width ([[bit]])
!Pixel ([[Pixel|GP]]/s)
!Texture ([[Texel (graphics)|GT]]/s)
! [[Vulkan]]
! [[Direct3D]]
! [[OpenGL]]
! [[OpenCL]]
|}
===GeForce 10 series===
{{Further|GeForce 10 series|Pascal (microarchitecture)}}
* [[Unified shader model|Unified shaders]]: [[texture mapping unit]]s: [[render output unit]]s
*Improved NVENC (better support for H.265, VP9, etc.)
* Supported [[Application programming interface|APIs]]: [[Direct3D]] 12 (12_1), [[OpenGL]] 4.6, [[OpenCL]] 3.0, [[Vulkan (API)|Vulkan]] 1.3 and [[CUDA]] 6.1<ref name="cuda-gpu-cap" />
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
|-
! rowspan="2" | Model{{spaces}}name
! rowspan="2" | Launch
! rowspan="2" | [[Code name]]
! rowspan="2" | Fab ([[Nanometer|nm]])
! rowspan="2" | Transistors (billion)
! rowspan="2" | Die size (mm<sup>2</sup>)
! rowspan="2" | [[Computer bus|Bus]] [[I/O interface|interface]]
! colspan="3" | Clock speeds
! rowspan="2" | Core config
! colspan="4" | Memory
! colspan="2" | [[Fillrate]]
! colspan="3" | Processing power (GFLOPS)
! colspan="4" | Supported [[Application programming interface|API]] version
! rowspan="2" | [[Thermal Design Power|TDP]] (Watts)
! rowspan="2" | [[Scalable Link Interface|SLI]] support
|-
! Base core clock ([[Hertz|MHz]])
! Boost core clock ([[Hertz|MHz]])
! Memory ([[Transfers per second|GT/s]])
! Size ([[Gibibyte|GiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width ([[bit]])
! Pixel ([[Pixel|GP]]/s)
! Texture ([[Texel (graphics)|GT]]/s)
! [[Half precision floating-point format|Half precision]]
! [[Single precision floating-point format|Single precision]] (Boost)
! [[Double precision floating-point format|Double precision]]
! [[DirectX]]
! [[OpenGL]]
! [[Vulkan (API)|Vulkan]]
! [[OpenCL]]
|-
! style="text-align:left;" | GeForce GTX 1050 (Notebook)<ref name="GeForce GTX 10-Series Notebooks">{{cite web|url=http://www.geforce.com/hardware/10series/notebook|title=GeForce GTX 10-Series Notebooks|work=geforce.com|access-date=2016-08-16|archive-url=https://web.archive.org/web/20161021192100/http://www.geforce.com/hardware/10series/notebook|archive-date=2016-10-21|url-status=live}}</ref><ref name="1050Mobile16ROP">{{cite web |url=https://www.anandtech.com/show/10980/nvidia-launches-geforce-gtx-1050-ti-gtx-1050-for-laptops |title=NVIDIA Launches GeForce GTX 1050 TI & GTX 1050 For Laptops |website=anandtech.com |access-date=2018-12-27 |archive-url=https://web.archive.org/web/20181227230413/https://www.anandtech.com/show/10980/nvidia-launches-geforce-gtx-1050-ti-gtx-1050-for-laptops |archive-date=2018-12-27 |url-status=dead }}</ref>
| rowspan="2" | January 3, 2017
| GP107 (N17P-G0-A1)
| rowspan="2" | [[14 nm]]
| rowspan="2" | 3.3
| rowspan="2" | 135
| rowspan="9" | PCIe 3.0 x16
| 1354
| 1493
| rowspan="2" | 7
| 640:40:16
| rowspan="2" | 4
| rowspan="2" | 112
| rowspan="7" | [[GDDR5]]
| rowspan="2" | 128
| 21.7
| 54.2
| 14
| 1733 <br>(1911)
| 27
| rowspan="9" | 12 <br>([[Feature levels in Direct3D#Direct3D 12|12_1]])
| rowspan="9" | 4.5
| rowspan="9" | 1.3
| rowspan="9" | 1.2
| 53
| rowspan="5" {{No}}
|-
! style="text-align:left;" | GeForce GTX 1050 Ti (Notebook)<ref name="GeForce GTX 10-Series Notebooks" />
| GP107 (N17P-G1-A1)
| 1493
| 1620
| 768:48:32
| 47.8
| 71.7
| 18
| 2293 <br>(2488)
| 36
| 64
|-
! style="text-align:left;" | GeForce GTX 1060 (Notebook)<ref name="GeForce GTX 10-Series Notebooks" />
| August 16,{{spaces}}2016
| rowspan="2" | GP106
| rowspan="7" | 16{{spaces}}nm
| rowspan="3" | 4.4
| rowspan="3" | 200
| 1404
| 1670
| rowspan="5" | 8
| rowspan="3" | 1280:80:48
| rowspan="3" | 6
| rowspan="3" | 192
| rowspan="3" | 192
| 67.4
| 112
| 56
| 3594 <br>(4275)
| 112
| rowspan="3" | 80
|-
! rowspan="2" style="text-align:left;" | GeForce GTX 1060 Max-Q
| rowspan="2" |May 2017
| 1063
| rowspan="2" |1480
| rowspan="2" |71.04
| rowspan="2" |118.4
| 59.20
| rowspan="2" |3789
| rowspan="2" |118.4
|-
| GP106B
| 1265
|
|-
! style="text-align:left;" | GeForce GTX 1070 (Notebook)<ref name="GeForce GTX 10-Series Notebooks" />
| August 16, 2016
| rowspan="4" | GP104/ GP104B<ref name="NVdevsuplist" />
| rowspan="4" | 7.2
| rowspan="4" | 314
| 1442
| 1645
| rowspan="2" | 2048:128:64
| rowspan="4" | 8
| rowspan="2" | 256
| rowspan="4" | 256
| 92.3
| 185
| 92
| 5906 <br>(6738)
| 185
| 115
| rowspan=1 {{Yes}}
|-
! style="text-align:left;" | GeForce GTX 1070 Max-Q
| May 2017
| 1101
| 1379
| 88.26
| 176.5
| 88.26
| 5648
| 176.5
| ?
| rowspan=1 {{No}}
|-
! style="text-align:left;" | GeForce GTX 1080 (Notebook)<ref name="GeForce GTX 10-Series Notebooks" />
| August 16, 2016
| 1556
| 1733
| rowspan="2" | 10
| rowspan="2" | 2560:160:64
| rowspan="2" | 320
| rowspan="2" |[[GDDR5X]]
| 99.6
| 249
| 124
| 7967 <br>(8873)
| 249
| 150
| rowspan=1 {{Yes}}
|-
! style="text-align:left;" | GeForce GTX 1080 Max-Q
| May 2017
| 1101
| 1468
| 93.95
| 234.9
| 117.4
| 7516
| 234.9
| ?
| rowspan=1 {{No}}
|-
|}
===GeForce 16 series===
{{Further|GeForce 16 series|Turing (microarchitecture)}}
* Supported [[Application programming interface|APIs]]: [[Direct3D]] 12 (12_1), [[OpenGL]] 4.6, [[OpenCL]] 3.0, [[Vulkan (API)|Vulkan]] 1.3 and [[CUDA]] 7.5; improved [[Nvidia NVENC|NVENC]]
*No [[Scalable Link Interface|SLI]], no Tensor Cores, and no [[Ray tracing (graphics)|ray-tracing]] hardware acceleration.
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
|+
! rowspan="2" |Model
! rowspan="2" |Launch
! rowspan="2" |[[Code name]]
! rowspan="2" |Process
! rowspan="2" |Transistors (billion)
! rowspan="2" |Die size (mm<sup>2</sup>)
! rowspan="2" |[[Computer bus|Bus]] [[I/O interface|interface]]
! colspan="2" |Clock speeds
! rowspan="2" |Core config
! rowspan="2" |[[GPU cache|L2]]
[[GPU cache|Cache]]([[Mebibyte|MiB]])
! rowspan="2" |Memory ([[Transfers per second|GT/s]])
! colspan="4" |Memory
! colspan="2" |[[Fillrate]]
! colspan="3" |Processing power ([[FLOPS|GFLOPS]])
! rowspan="2" |[[Thermal Design Power|TDP]] (Watts)
|-
!Base core clock ([[Hertz|MHz]])
!Boost core clock ([[Hertz|MHz]])
!Size ([[Gibibyte|GiB]])
!Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
!Bus type
!Bus width ([[bit]])
!Pixel ([[Pixel|GP]]/s)
!Texture ([[Texel (graphics)|GT]]/s)
![[Half precision floating-point format|Half precision]]
![[Single precision floating-point format|Single precision]]
![[Double precision floating-point format|Double precision]]
|-
!GeForce GTX 1630
|Jun 28, 2022
| rowspan="2" |TU117
| rowspan="8" |TSMC 12FFN
| rowspan="5" |4.7
| rowspan="5" |200
| rowspan="8" |PCIe 3.0
x16
|1740
|1785
|512:32:16
| rowspan="5" |1.0
|12
|4
|96
|GDDR6
|64
|28.56
|57.12
|3656
|1828
|57.12
|75
|-
!GeForce GTX 1650 (Laptop)
| rowspan="2" |April 23, 2019
|1395
|1560
| rowspan="4" |1024:64:32
| rowspan="2" |8
| rowspan="2" |4
|128
| rowspan="2" |GDDR5
| rowspan="4" |128
|49.92
|99.84
|6390
|3195
|99.84<ref>{{cite web|url=https://www.techpowerup.com/gpu-specs/geforce-gtx-1650-mobile.c3367|title=NVIDIA GeForce GTX 1650 Mobile Specs|website=TechPowerUp|language=en|access-date=2020-03-02}}</ref>
|50
|-
!GeForce GTX 1650 Max-Q
| TU117(N18P-G0-MP-A1)
|1020
|1245
|112
|39.84
|79.68
|5100
|2550
|79.68<ref>{{cite web|url=https://www.techpowerup.com/gpu-specs/geforce-gtx-1650-max-q.c3383|title=NVIDIA GeForce GTX 1650 Max-Q Specs|website=TechPowerUp|language=en|access-date=2020-03-02}}</ref>
|30
|-
!GeForce GTX 1650 Ti Max-Q
| rowspan="2" |April 2, 2020
| TU117
|1035
|1200
| rowspan="2" |12
| rowspan="2" |4
| rowspan="2" |192
| rowspan="5" |GDDR6
|38.4
|76.8
| 4915
| 2458
| 76.8
| 35
|-
!GeForce GTX 1650 Ti
| TU117(N18P-G62-A1)
|1350
|1485
|47.52
|95.04
| 6083
| 3041
| 95.04
| 55
|-
!GeForce GTX 1660 (Laptop)
|?
| rowspan="3" |TU116
| rowspan="3" |6.6
| rowspan="3" |284
|1455
|1599
|1408:88:48
| rowspan="3" |1.5
|16
| rowspan="3" |6
|384
| rowspan="3" |192
|76.32
|127.2
|8141
|4070
|127.2<ref>{{cite web|url=https://www.techpowerup.com/gpu-specs/geforce-gtx-1660-mobile.c3394|title=NVIDIA GeForce GTX 1660 Mobile Specs|website=TechPowerUp|language=en|access-date=2020-03-02}}</ref>
|?
|-
!GeForce GTX 1660 Ti Max-Q
| rowspan="2" |April 23, 2019
|1140
|1335
|1536:96:48
| rowspan="2" |12
|288
|64.08
|128.2
|8202
|4101
|128.2<ref>{{cite web|url=https://www.techpowerup.com/gpu-specs/geforce-gtx-1660-ti-max-q.c3382|title=NVIDIA GeForce GTX 1660 Ti Max-Q Specs|website=TechPowerUp|language=en|access-date=2020-03-02}}</ref>
|60
|-
!GeForce GTX 1660 Ti (Laptop)<ref>{{cite web|url=https://www.techpowerup.com/gpu-specs/geforce-gtx-1660-ti-mobile.c3369|title=NVIDIA GeForce GTX 1660 Ti Mobile Specs|website=TechPowerUp|language=en|access-date=2020-02-04}}</ref>
|1455
|1590
|1536:96:48
|288
|76.32
|152.6
|9769
|4884
|152.6<ref>{{cite web|url=https://www.techpowerup.com/gpu-specs/geforce-gtx-1660-ti-mobile.c3369|title=NVIDIA GeForce GTX 1660 Ti Mobile Specs|website=TechPowerUp|language=en|access-date=2020-03-02}}</ref>
|80
|}
===GeForce 20 series===
{{Further|GeForce 20 series|Turing (microarchitecture)}}
* Supported [[Application programming interface|APIs]]: [[Direct3D]] 12 (12_2), [[OpenGL]] 4.6, [[OpenCL]] 3.0, [[Vulkan (API)|Vulkan]] 1.3 and [[CUDA]] 7.5; improved [[Nvidia NVENC|NVENC]] (e.g. B-frame support for [[High Efficiency Video Coding|H.265]])
* MX graphics lack [[NVENC]] and are based on the Pascal architecture.<ref>{{cite news|url=https://www.dobreprogramy.pl/nvidia-geforce-mx250-i-mx230-dwie-nowe-grafiki-do-laptopow,6628559174182529a|title=NVIDIA GeForce MX250 i MX230 – dwie "nowe" grafiki do laptopów|work=Dobre Programy|date=2019-02-21|language=pl|access-date=18 February 2022|archive-date=18 February 2022|archive-url=https://web.archive.org/web/20220218175646/https://www.dobreprogramy.pl/nvidia-geforce-mx250-i-mx230-dwie-nowe-grafiki-do-laptopow,6628559174182529a|url-status=live}}</ref>
*Adds Tensor Cores and [[Ray tracing (graphics)|ray tracing]] hardware acceleration, plus RTX IO (RTX cards only)
*Nvidia DLSS
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
|-
! scope="col" rowspan="2" | Model{{spaces}}name
! scope="col" rowspan="2" | Launch
! scope="col" rowspan="2" | [[Code name]]
! scope="col" rowspan="2" | Process
! scope="col" rowspan="2" | Transistors (billion)
! scope="col" rowspan="2" | Die size (mm<sup>2</sup>)
! rowspan="2" scope="col" | [[Computer bus|Bus]] [[I/O interface|interface]]
! colspan="3" scope="colgroup" | Clock speeds
! rowspan="2" scope="col" | Core config{{efn|name=CoreConfig}}
! rowspan="2" scope="col" | [[GPU cache|L2 Cache]] ([[Mebibyte|MiB]])
! scope="colgroup" colspan="4" | Memory
! scope="colgroup" colspan="2" | [[Fillrate]]{{efn|name=PerfValues}}
! colspan="4" |Processing power ([[GFLOPS]]){{efn|name=PerfValues}}
! scope="colgroup" colspan="2" | [[Ray tracing (graphics)|Ray-tracing]] Performance
! scope="col" rowspan="2" | [[Thermal design power|TDP]] (Watts)
|-
! scope="col" | Base core clock ([[Hertz|MHz]])
! scope="col" | Boost core clock ([[Hertz|MHz]])
! scope="col" | Memory ([[Transfers per second|GT/s]])
! scope="col" | Size ([[Gibibyte|GiB]])
! scope="col" | Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! scope="col" | Bus type
! scope="col" | Bus width ([[bit]])
! scope="col" | Pixel ([[Pixel|GP]]/s){{efn|name=PixelFillrate}}
! scope="col" | Texture ([[Texel (graphics)|GT]]/s){{efn|name=TextureFillrate}}
! scope="col" | [[Half precision floating-point format|Half precision]]
! scope="col" | [[Single precision floating-point format|Single precision]]
! scope="col" | [[Double precision floating-point format|Double precision]]
! scope="col" | [[Tensor]] compute (FP16)
! scope="col" | Rays/s (Billions)
! scope="col" | RTX OPS/s (Trillions)
|-
! scope="row" style="text-align:left;" | GeForce RTX 2050<ref>{{cite web|last=Smith|first=Ryan|title=NVIDIA Announces GeForce RTX 2050, MX570, and MX550 For Laptops: 2022's Entry Level GeForce|url=https://www.anandtech.com/show/17124/nvidia-announces-geforce-rtx-2050-mx570-and-mx550-for-laptops-2022s-entry-level-geforce|access-date=2021-12-18|website=www.anandtech.com|archive-date=24 February 2024|archive-url=https://web.archive.org/web/20240224182257/https://www3.anandtech.com/show/17124/nvidia-announces-geforce-rtx-2050-mx570-and-mx550-for-laptops-2022s-entry-level-geforce|url-status=dead}}</ref><ref>{{cite web|title=NVIDIA GeForce RTX 2050 Mobile Specs|url=https://www.techpowerup.com/gpu-specs/geforce-rtx-2050-mobile.c3859|access-date=2021-12-18|website=TechPowerUp|language=en}}</ref>
|January 3, 2022
|GA107<br>(Ampere)
|[[Samsung]] [[10 nm process|8N]]
|8.7
|200
|PCIe 3.0 x8
|1155
|1477
| rowspan="2" |14
|2048:64: 32:64:16 <br>(16) (3)
|2
|4
|112.0
| rowspan="11" |[[GDDR6 SDRAM|GDDR6]]
|64
|
|
|
|
|
|
|
|
|30-45
|-
! scope="row" style="text-align:left;" | GeForce RTX 2060<ref>{{cite web|title=NVIDIA GeForce RTX 2060 Mobile Specs|url=https://www.techpowerup.com/gpu-specs/geforce-rtx-2060-mobile.c3348|access-date=2019-09-17|website=TechPowerUp|language=en|archive-date=28 December 2019|archive-url=https://web.archive.org/web/20191228135429/https://www.techpowerup.com/gpu-specs/geforce-rtx-2060-mobile.c3348|url-status=live}}</ref>
| rowspan="4" | January 29,{{spaces}}2019
| rowspan="4" |TU106
| rowspan="10" |[[TSMC]]<br>[[Die shrink#Half-shrink|12FFN]]
| rowspan="4" |10.8
| rowspan="4" |445
| rowspan="10" |PCIe 3.0 x16
|960
|1200
| rowspan="2" |1920:120: 48:240:30 <br>(30) (3)
| rowspan="2" |3
| rowspan="2" |6
|336.0
| rowspan="2" |192
|57.6
|144
|9216
|4608
|144.0
|
|
|
|80
|-
! scope="row" style="text-align:left;" | GeForce RTX 2060 Max-Q<ref>{{cite web|title=NVIDIA GeForce RTX 2060 Max-Q Specs|url=https://www.techpowerup.com/gpu-specs/geforce-rtx-2060-max-q.c3533|access-date=2020-08-31|website=TechPowerUp|language=en}}</ref>
|975
|1175
|11
|264.0
|56.88
|142.2
|9101
|4550
|142.2
|
|
|
|65
|-
! scope="row" style="text-align:left;" | GeForce RTX 2070<ref>{{cite web|url=https://www.techpowerup.com/gpu-specs/geforce-rtx-2070-mobile.c3349|title=NVIDIA GeForce RTX 2070 Mobile Specs|website=TechPowerUp|language=en|access-date=2019-09-17|archive-date=8 May 2020|archive-url=https://web.archive.org/web/20200508013416/https://www.techpowerup.com/gpu-specs/geforce-rtx-2070-mobile.c3349|url-status=live}}</ref>
|1215
|1440
|14
| rowspan="2" | 2304:144: 64:288:36 <br>(36) (3)
| rowspan="8" |4
| rowspan="8" |8
| rowspan="4" |448.0
| rowspan="8" |256
|92.16
|207.4
|13270
|6636
|207.4
|
|
|
|115
|-
! scope="row" style="text-align:left;" | GeForce RTX 2070 Max-Q<ref>{{cite web|url=https://www.techpowerup.com/gpu-specs/geforce-rtx-2070-max-q.c3392|title=NVIDIA GeForce RTX 2070 Max-Q Specs|website=TechPowerUp|language=en|access-date=2019-09-17|archive-date=14 July 2020|archive-url=https://web.archive.org/web/20200714202401/https://www.techpowerup.com/gpu-specs/geforce-rtx-2070-max-q.c3392|url-status=live}}</ref>
|885
|1185
|12
|75.84
|170.6
|10920
|5460
|170.6
|
|
|
|80
|-
! scope="row" style="text-align:left;" | GeForce RTX 2070 Super<ref>{{cite web|url=https://www.techpowerup.com/gpu-specs/geforce-rtx-2070-super-mobile.c3514|title=NVIDIA GeForce RTX 2070 SUPER Mobile Specs|website=TechPowerUp|language=en|access-date=2020-04-17}}</ref>
| rowspan="2" | April 2, 2020
| rowspan="6" |TU104
| rowspan="6" |13.6
| rowspan="6" |545
|1140
|1380
|14
| rowspan="2" | 2560:160: 64:320:40 <br>(40) (5)
|88.3
|220.8
|14130
|7066
|220.8
|
|
|
|115
|-
! scope="row" style="text-align:left;" | GeForce RTX 2070 Super Max-Q<ref>{{cite web|url=https://www.techpowerup.com/gpu-specs/geforce-rtx-2070-super-max-q.c3516|title=NVIDIA GeForce RTX 2070 SUPER Max-Q Specs|website=TechPowerUp|language=en|access-date=2020-04-03|archive-date=30 July 2020|archive-url=https://web.archive.org/web/20200730110816/https://www.techpowerup.com/gpu-specs/geforce-rtx-2070-super-max-q.c3516|url-status=live}}</ref>
|930
|1155
|12
|69.1
|172.8
|11060
|5530
|172.8
|
|
|
|80
|-
! scope="row" style="text-align:left;" | GeForce RTX 2080<ref>{{cite web|url=https://www.techpowerup.com/gpu-specs/geforce-rtx-2080-mobile.c3390|title=NVIDIA GeForce RTX 2080 Mobile Specs|website=TechPowerUp|language=en|access-date=2019-09-17|archive-date=19 July 2020|archive-url=https://web.archive.org/web/20200719183145/https://www.techpowerup.com/gpu-specs/geforce-rtx-2080-mobile.c3390|url-status=live}}</ref>
| rowspan="2" | January 29, 2019
|1380
|1590
|14
| rowspan="2" | 2944:184: 64:368:46 <br>(46) (6)
| rowspan="2" |384.0
|101.8
|292.6
| 18720
| 9362
| 292.6
|
|
|
|150
|-
! scope="row" style="text-align:left;" | GeForce RTX 2080 Max-Q<ref>{{cite web|url=https://www.techpowerup.com/gpu-specs/geforce-rtx-2080-max-q.c3363|title=NVIDIA GeForce RTX 2080 Max-Q Specs|website=TechPowerUp|language=en|access-date=2019-09-17|archive-date=18 July 2020|archive-url=https://web.archive.org/web/20200718083522/https://www.techpowerup.com/gpu-specs/geforce-rtx-2080-max-q.c3363|url-status=live}}</ref>
|735
|1095
|12
|70.08
|201.5
|12890
|6447
|201.5
|
|
|
|80
|-
! scope="row" style="text-align:left;" | GeForce RTX 2080 Super<ref>{{cite web|url=https://www.techpowerup.com/gpu-specs/geforce-rtx-2080-super-mobile.c3513|title=NVIDIA GeForce RTX 2080 SUPER Mobile Specs|website=TechPowerUp|language=en|access-date=2020-04-17}}</ref>
| rowspan="2" | April 2, 2020
|1365
|1560
|14
| rowspan="2" |3072:192: 64:384:48 <br>(48) (6)
|448.0
|99.8
|299.5
|19170
|9585
|299.5
|
|
|
|150
|-
! scope="row" style="text-align:left;" | GeForce RTX 2080 Super Max-Q<ref>{{cite web|url=https://www.techpowerup.com/gpu-specs/geforce-rtx-2080-super-max-q.c3515|title=NVIDIA GeForce RTX 2080 SUPER Max-Q Specs|website=TechPowerUp|language=en|access-date=2020-04-03|archive-date=4 April 2020|archive-url=https://web.archive.org/web/20200404135253/https://www.techpowerup.com/gpu-specs/geforce-rtx-2080-super-max-q.c3515|url-status=live}}</ref>
|735
|1080
|11
|352.0
|69.1
|207.4
|13270
|6636
|207.4
|
|
|
|80
|-
|}
{{notelist|refs=
{{efn|name=CoreConfig|Main [[Unified shader model|Shader Processors]] ''':''' [[texture mapping unit|Texture Mapping Units]] ''':''' [[render output unit|Render Output Units]] ''':''' [[Tensor]] Cores (or FP16 Cores in GeForce 16 series) ''':''' [[Ray tracing (graphics)|Ray-tracing]] Cores (Streaming Multiprocessors) (Graphics Processing Clusters)}}
{{efn|name=PixelFillrate|Pixel fillrate is calculated as the lowest of three numbers: number of ROPs multiplied by the base core clock speed, number of rasterizers multiplied by the number of fragments they can generate per rasterizer multiplied by the base core clock speed, and the number of streaming multiprocessors multiplied by the number of fragments per clock that they can output multiplied by the base clock rate. A worked sketch of the fillrate formulas follows these notes.}}
{{efn|name=TextureFillrate|Texture fillrate is calculated as the number of TMUs multiplied by the base core clock speed.}}
{{efn|name=PerfValues|Base clock, Boost clock}}
}}
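A minimal sketch of the fillrate calculations described in the notes above (not part of the official specifications; the helper names below are illustrative only). Only the ROP term of the pixel-fillrate minimum is computed, since rasterizer and per-SM throughput are not listed in these tables; the figures used are the GeForce RTX 2060 laptop entries at its 1200 MHz boost clock:
<syntaxhighlight lang="python">
# Texture fillrate = TMUs x core clock; pixel fillrate = the lowest of three
# bounds (ROP, rasterizer and SM throughput) - only the ROP bound is shown here.
def texture_fillrate_gtexels(tmus: int, clock_mhz: float) -> float:
    return tmus * clock_mhz / 1000.0

def pixel_fillrate_rop_bound_gpixels(rops: int, clock_mhz: float) -> float:
    # One of the three terms whose minimum gives the pixel fillrate.
    return rops * clock_mhz / 1000.0

# GeForce RTX 2060 (laptop): 120 TMUs, 48 ROPs, 1200 MHz boost clock.
print(texture_fillrate_gtexels(120, 1200))          # 144.0 GT/s, as in the table
print(pixel_fillrate_rop_bound_gpixels(48, 1200))   # 57.6 GP/s, as in the table
</syntaxhighlight>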
===GeForce 30 series===
{{Further|GeForce 30 series|Ampere (microarchitecture)}}
* Supported [[Application programming interface|APIs]]: [[Direct3D]] 12 Ultimate (12_2), [[OpenGL]] 4.6, [[OpenCL]] 3.0, [[Vulkan (API)|Vulkan]] 1.3<ref name="vulkandrv" /> and [[CUDA]] 8.6
*3rd-generation [[Tensor Core|Tensor cores]]
*2nd-generation [[RT core]]s
*RTX IO
*Improved [[Nvidia NVDEC|NVDEC]] (adds AV1 decoding)
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
|-
! rowspan="2" scope="col" | Model{{spaces}}name
! rowspan="2" scope="col" | Launch
! rowspan="2" scope="col" | [[Code name]]
! rowspan="2" scope="col" | Process
! rowspan="2" scope="col" | Transistors (billion)
! rowspan="2" scope="col" | Die size (mm<sup>2</sup>)
! rowspan="2" scope="col" | [[Computer bus|Bus]] [[I/O interface|interface]]
! colspan="3" scope="colgroup" | Clock speeds{{efn|name=ClockSpeed|Which base and boost core clockspeeds the GPU has depends on the TDP configuration set by the system builder}}
! rowspan="2" scope="col" | Core config<br>{{efn|name=CoreConfig}}
! rowspan="2" scope="col" | [[GPU cache|L2 Cache]] ([[Mebibyte|MiB]])
! colspan="4" scope="colgroup" | Memory
! colspan="2" scope="colgroup" | [[Fillrate]]{{efn|name=PerfValues}}
! colspan="5" |Processing power ([[TFLOPS]]){{efn|name=PerfValues}}
! colspan="2" scope="colgroup" | [[Ray tracing (graphics)|Ray-tracing]] Performance
! rowspan="2" scope="col" | [[Thermal design power|TDP]] (Watts)
|-
! scope="col" | Base core ([[Hertz|MHz]])
! scope="col" | Boost core ([[Hertz|MHz]])
! scope="col" | Memory ([[Hertz|MHz]])<br />([[Data-rate units|Gb/s]])<br/>([[Transfers per second|GT/s]])
! scope="col" | Size ([[Gibibyte|GiB]])
! scope="col" | Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! scope="col" | Bus type
! scope="col" | Bus width ([[bit]])
! scope="col" | Pixel ([[Pixel|GP]]/s){{efn|name=PixelFillrate}}
! scope="col" | Texture ([[Texel (graphics)|GT]]/s){{efn|name=TextureFillrate}}
! scope="col" | [[Half precision floating-point format|Half precision]]
! scope="col" | [[Single precision floating-point format|Single precision]]
! scope="col" | [[Double precision floating-point format|Double precision]]
! scope="col" | [[Tensor]] compute (FP16)
! scope="col" | [[Tensor]] TOPS (INT8)
! scope="col" | Rays/s (Billions)
! scope="col" | RTX OPS/s (Trillions)
|-
! scope="row" style="text-align:left;" | GeForce RTX 3050<br>Mobile/<ref>{{cite news|last=Hinum|first=Klaus|title=NVIDIA GeForce RTX 3050 Mobile GPU - Benchmarks and Specs|url=https://www.notebookcheck.net/NVIDIA-GeForce-RTX-3050-Mobile-GPU-Benchmarks-and-Specs.513790.0.html|access-date=2021-01-17|website=Notebookcheck|language=en|archive-date=22 January 2021|archive-url=https://web.archive.org/web/20210122045648/https://www.notebookcheck.net/NVIDIA-GeForce-RTX-3050-Mobile-GPU-Benchmarks-and-Specs.513790.0.html|url-status=live}}</ref><br/>Laptop<ref>{{cite web|title=NVIDIA GeForce RTX 3050 Mobile Specs - TechPowerUp GPU Database|url=https://www.techpowerup.com/gpu-specs/geforce-rtx-3050-mobile.c3788|access-date=2021-04-17|website=TechPowerUp|language=en}}</ref>
| rowspan="2" | May 11, 2021
| rowspan="2" | GA107
| rowspan="7" |[[Samsung]] 8N
| rowspan="2" | 8.7
| rowspan="2" | 200
| rowspan="2" |[[PCIe 4.0|PCIe 4.0 x8]]
| 712–1530<br />622-1237
| 1057–1740<br />990-1492
| 1375–1500<br />11–12<br />11–12<br />1375–1750<br />11–14<br />11–14
| 2048:64: 32:64:16<br>(16) (3)<br>2560:80: 32:80:20<br>(20) (?)
| rowspan="2" | 2
| 4<br/>6
| 176.0-192.0<br/>132.0-224.0
| rowspan="7" | GDDR6
| 128<br/>96
| 22.7-48.9<br/>33.8-55.6<br/>
| 45.6-97.9<br/>67.7-111.4
| 2.92-6.27<br />4.33-7.13
| 2.92-6.27<br/>4.33-7.13
| 0.046-0.098<br/>0.068-0.111
|
|
|
|
| rowspan="2" | 35–80
|-
! scope="row" style="text-align:left;" | GeForce<ref>{{cite news|last=Hinum|first=Klaus|title=NVIDIA GeForce RTX 3050 Ti Mobile GPU - Benchmarks and Specs|url=https://www.notebookcheck.net/NVIDIA-GeForce-RTX-3050-Ti-Laptop-GPU-GPU-Benchmarks-and-Specs.527430.0.html|access-date=2021-04-17|website=Notebookcheck|language=en}}</ref><br/>RTX 3050 <br/>Ti Mobile/<br/>Laptop<ref>{{cite web|title=NVIDIA GeForce RTX 3050 Ti Mobile Specs - TechPowerUp GPU Database|url=https://www.techpowerup.com/gpu-specs/geforce-rtx-3050-ti-mobile.c3778|access-date=2021-04-17|website=TechPowerUp|language=en}}</ref>
| 735–1462
| 1035–1695
| 12
| 2560:80: 48:80:20<br>(20) (3)
| 4
| 192
|128
| 35.3-70.2<br/>49.7-81.4
| 58.8-117.0<br/>82.8-135.6
| 3.76-7.49<br />5.30-8.68
| 3.76-7.49<br/>5.30-8.68
| 0.059-0.117<br/>0.083-0.136
|
|
|
|
|-
! scope="row" style="text-align:left;" | GeForce RTX 3060 Mobile/<ref>{{cite news|last=Hinum|first=Klaus|title=NVIDIA GeForce RTX 3060 Mobile GPU - Benchmarks and Specs|url=https://www.notebookcheck.net/NVIDIA-GeForce-RTX-3060-Mobile-GPU-Benchmarks-and-Specs.497453.0.html|access-date=2021-01-17|website=Notebookcheck|language=en|archive-date=26 January 2021|archive-url=https://web.archive.org/web/20210126163550/https://www.notebookcheck.net/NVIDIA-GeForce-RTX-3060-Mobile-GPU-Benchmarks-and-Specs.497453.0.html|url-status=live}}</ref><br/>Laptop<ref>{{cite web|title=NVIDIA GeForce RTX 3060 Mobile Specs - TechPowerUp GPU Database|url=https://www.techpowerup.com/gpu-specs/geforce-rtx-3060-mobile.c3757|access-date=2021-04-17|website=TechPowerUp|language=en}}</ref>
| rowspan=2 | January 12, 2021
| GA106
| 12.0
| 276
| rowspan="5" |[[PCIe 4.0|PCIe 4.0 x16]]
| 817–1387
| 1282–1702
| rowspan="4" | 12<br />14
| 3840:120: 48:120:30<br>(30) (3)
| 3
| 6
| 288<br/>336
| 192
| 39.2-66.6<br/>61.54-81.7
| 98.0-166.4<br/>153.8-204.2
| 6.27-10.65<br />9.85-13.07
| 6.27-10.65<br/>9.85-13.07
| 0.108-0.166<br/>0.154-0.204
|
|
|
|
|60-115
|-
!scope="row" style="text-align:left;" |GeForce<ref>{{cite web|last=Hinum|first=Klaus|title=NVIDIA GeForce RTX 3070 Mobile GPU - Benchmarks and Specs|url=https://www.notebookcheck.net/NVIDIA-GeForce-RTX-3070-Mobile-GPU-Benchmarks-and-Specs.497451.0.html|access-date=2021-01-17|website=Notebookcheck|language=en|archive-date=19 January 2021|archive-url=https://web.archive.org/web/20210119050351/https://www.notebookcheck.net/NVIDIA-GeForce-RTX-3070-Mobile-GPU-Benchmarks-and-Specs.497451.0.html|url-status=live}}</ref><br/>RTX 3070 Mobile/<ref>{{cite web|title=NVIDIA GeForce RTX 3070 Mobile Specs|url=https://www.techpowerup.com/gpu-specs/geforce-rtx-3070-mobile.c3712|access-date=2021-01-18|website=TechPowerUp|language=en}}</ref><br/>Laptop<ref>{{cite web|last=Herter|first=Marc|title=Lenovo Legion 5 Pro 16 review: A gaming laptop with a bright 165-Hz display|url=https://www.notebookcheck.net/Lenovo-Legion-5-Pro-16-review-A-gaming-laptop-with-a-bright-165-Hz-display.554931.0.html|access-date=2021-09-13|website=Notebookcheck|date=13 August 2021|language=en|archive-date=24 February 2024|archive-url=https://web.archive.org/web/20240224005105/https://www.notebookcheck.net/Lenovo-Legion-5-Pro-16-review-A-gaming-laptop-with-a-bright-165-Hz-display.554931.0.html|url-status=live}}</ref>
| GA104-770-A1
| rowspan="3" | 17.4
| rowspan="3" | 392
| 780–1215
| 1290–1720
| 5120:160: 80:160:40<br>(40) (6)
| rowspan="4" |4
| rowspan="2" | 8
| rowspan="3" | 384<br/>448
| rowspan="4" | 256
| 62.4-97.2<br/>103.2-129.6
| 124.8-194.4<br/>206.4-259.2
| 7.99-12.44<br />13.21-16.59
| 7.99-12.44<br/>13.21-16.59
| 0.125-0.194<br/>0.206-0.259
|
|
|
|
| rowspan=2 | 80–125
|-
! style="text-align:left;" |GeForce RTX 3070<br/>Ti Mobile/<br/>Laptop<ref>{{Cite web |title=NVIDIA GeForce RTX 3070 Ti Mobile Specs {{!}} TechPowerUp GPU Database |url=https://www.techpowerup.com/gpu-specs/geforce-rtx-3070-ti-mobile.c3852 |access-date=2022-09-24 |website=TechPowerUp}}</ref>
|January 4, 2022
|GA104
|510-1035
|1035-1485
|5888:184: 96:184:46 <br>(46) (6)
|46.9-95.2<br/>95.2-136.6
|93.8-190.4<br/>190.4-273.2
|6.01-12.19<br />12.19-17.49
|6.01-12.19<br/>12.19-17.49
|0.094-0.190<br/>0.190-0.273
|16.6
|
|
|
|-
! scope="row" style="text-align:left;" |GeForce RTX 3080 Mobile/<ref>{{cite web|title=NVIDIA GeForce RTX 3080 Mobile Specs|url=https://www.techpowerup.com/gpu-specs/geforce-rtx-3080-mobile.c3684|access-date=2021-01-17|website=TechPowerUp|language=en}}</ref><br/>Laptop<ref>{{cite web|last=Hinum|first=Klaus|title=NVIDIA GeForce RTX 3080 Mobile GPU - Benchmarks and Specs|url=https://www.notebookcheck.net/NVIDIA-GeForce-RTX-3080-Mobile-GPU-Benchmarks-and-Specs.497450.0.html|access-date=2021-01-17|website=Notebookcheck|language=en|archive-date=21 January 2021|archive-url=https://web.archive.org/web/20210121004856/https://www.notebookcheck.net/NVIDIA-GeForce-RTX-3080-Mobile-GPU-Benchmarks-and-Specs.497450.0.html|url-status=live}}</ref>
|January 12, 2021
| GA104-775-A1
|780-1350
|1245-1810
|6144:192: 96:192:48<br>(48) (6)
|8-16
|74.9-129.6<br/>119.5-164.2
|149.8-259.2<br/>239.0-328.3
|9.59-16.59<br />15.30-22.2
|9.59-16.59<br/>15.30-22.2
|0.150-0.259<br/>0.239-0.328
|
|
|
|
| rowspan=2 | 80–150
|-
! style="text-align:left;" |GeForce RTX 3080<br/>Ti Mobile/<br/>Laptop<ref>{{Cite web |title=NVIDIA GeForce RTX 3080 Ti Mobile Specs {{!}} TechPowerUp GPU Database |url=https://www.techpowerup.com/gpu-specs/geforce-rtx-3080-ti-mobile.c3840 |access-date=2022-09-24 |website=TechPowerUp}}</ref>
|January 25,{{spaces}}2022
|GA103
|22
|496
|585-1230
|1125-1590
|12<br>16
|7424:232: 116:232:58 <br>(58) (6)
|16
|384<br/>512
|67.9-142.7<br/>{{nbsp}}130.5-184.4{{nbsp}}
|135.7-285.4<br/>{{nbsp}}261.0-368.9{{nbsp}}
|8.68-18.26<br />{{nbsp}}{{nbsp}}16.7-23.60{{nbsp}}{{nbsp}}
|8.68-18.26<br/>{{nbsp}}{{nbsp}}16.7-23.60{{nbsp}}{{nbsp}}
|0.136-0.285<br/>{{nbsp}}{{nbsp}}0.261-0.369{{nbsp}}{{nbsp}}
|18.71
|
|
|
|}
{{notelist|refs=
{{efn|name=CoreConfig|Main [[Unified shader model|Shader Processors]] ''':''' [[texture mapping unit|Texture Mapping Units]] ''':''' [[render output unit|Render Output Units]] ''':''' [[Tensor]] Cores (or FP16 Cores in GeForce 16 series) ''':''' [[Ray tracing (graphics)|Ray-tracing]] Cores (Streaming Multiprocessors) (Graphics Processing Clusters)}}
{{efn|name=ClockSpeed|Which base and boost core clockspeeds the GPU has depends on the TDP configuration set by the system builder}}
{{efn|name=PixelFillrate|Pixel fillrate is calculated as the lowest of three numbers: number of ROPs multiplied by the base core clock speed, number of rasterizers multiplied by the number of fragments they can generate per rasterizer multiplied by the base core clock speed, and the number of streaming multiprocessors multiplied by the number of fragments per clock that they can output multiplied by the base clock rate.}}
{{efn|name=TextureFillrate|Texture fillrate is calculated as the number of TMUs multiplied by the base core clock speed.}}
{{efn|name=PerfValues|Base clock, Boost clock.}}
}}
=== GeForce 40 series ===
{{Further|GeForce 40 series|Ada Lovelace (microarchitecture)}}
*Supported [[Application programming interface|APIs]]: [[Direct3D]] 12 Ultimate (12_2), [[OpenGL]] 4.6, [[OpenCL]] 3.0, [[Vulkan (API)|Vulkan]] 1.3<ref name="vulkandrv" /> and [[CUDA]] 8.9
*4th-generation [[Tensor Core|Tensor cores]]
*3rd-generation [[RT core]]s
*DLSS 3 (Super Resolution + Frame Generation)<ref>{{Cite web |title=Introducing NVIDIA DLSS 3 |url=https://www.nvidia.com/en-us/geforce/news/dlss3-ai-powered-neural-graphics-innovations/ |access-date=2023-01-04 |website=NVIDIA |language=en-us |archive-date=23 May 2024 |archive-url=https://web.archive.org/web/20240523145811/https://www.nvidia.com/en-us/geforce/news/dlss3-ai-powered-neural-graphics-innovations/ |url-status=live }}</ref>
*Shader Execution Reordering (SER)<ref>{{Cite web |date=2022-10-13 |title=Improve Shader Performance and In-Game Frame Rates with Shader Execution Reordering |url=https://developer.nvidia.com/blog/improve-shader-performance-and-in-game-frame-rates-with-shader-execution-reordering/ |access-date=2023-01-04 |website=NVIDIA Technical Blog |language=en-US |archive-date=25 May 2023 |archive-url=https://web.archive.org/web/20230525025659/https://developer.nvidia.com/blog/improve-shader-performance-and-in-game-frame-rates-with-shader-execution-reordering/ |url-status=live}}</ref>
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
|-
! rowspan="2" scope="col" | Model{{spaces}}name
! rowspan="2" scope="col" | Launch
! rowspan="2" scope="col" | [[Code name]]
! rowspan="2" scope="col" | Process
! rowspan="2" scope="col" | Transistors (billion)
! rowspan="2" scope="col" | Die size (mm<sup>2</sup>)
! rowspan="2" scope="col" | [[Computer bus|Bus]] [[I/O interface|interface]]
! colspan="3" scope="colgroup" | Clock speeds{{efn|name=ClockSpeed|Which base and boost core clockspeeds the GPU has depends on the TDP configuration set by the system builder}}
! rowspan="2" scope="col" | Core config{{efn|name=CoreConfig}}
! rowspan="2" scope="col" | [[GPU cache|L2 Cache]] ([[Mebibyte|MiB]])
! colspan="4" scope="colgroup" | Memory
! colspan="2" scope="colgroup" | [[Fillrate]]{{efn|name=PerfValues}}
! colspan="5" |Processing power ([[TFLOPS]]){{efn|name=PerfValues}}
! colspan="2" scope="colgroup" | [[Ray tracing (graphics)|Ray-tracing]] Performance
! rowspan="2" scope="col" | [[Thermal design power|TDP]] (Watts)
|-
! scope="col" | Base core ([[Hertz|MHz]])
! scope="col" | Boost core ([[Hertz|MHz]])
! scope="col" | Memory ([[Hertz|MHz]])<br>([[Data-rate units|Gb/s]])<br>([[Transfers per second|GT/s]])
! scope="col" | Size ([[Gibibyte|GiB]])
! scope="col" | Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! scope="col" | Bus type
! scope="col" | Bus width ([[bit]])
! scope="col" | Pixel ([[Pixel|GP]]/s){{efn|name=PixelFillrate}}
! scope="col" | Texture ([[Texel (graphics)|GT]]/s){{efn|name=TextureFillrate}}
! scope="col" | [[Half precision floating-point format|Half precision]]
! scope="col" | [[Single precision floating-point format|Single precision]]
! scope="col" | [[Double precision floating-point format|Double precision]]
! scope="col" | [[Tensor]] compute (FP16)
! scope="col" | [[Tensor]] TOPS (INT8)
! scope="col" | Rays/s (Billions)
! scope="col" | RTX OPS/s (Trillions)
|-
! scope="row" style="text-align:left;" | GeForce RTX 4050 Mobile/<br/>Laptop<ref name=":1">{{Cite web |title=GeForce RTX 40 Series Laptops: NVIDIA Ada Lovelace Breaks Energy-Efficiency Barrier, Supercharges 170+ Laptop Designs For Gamers & Creators |url=https://www.nvidia.com/en-us/geforce/news/geforce-rtx-40-series-laptops-available-february-8/ |access-date=2023-01-04 |website=NVIDIA |language=en-us |archive-date=21 March 2023 |archive-url=https://web.archive.org/web/20230321071930/https://www.nvidia.com/en-us/geforce/news/geforce-rtx-40-series-laptops-available-february-8/ |url-status=live }}</ref>
|rowspan="3" | February 22, 2023
|AD107<br/>GN21-X2
|rowspan="5" |[[TSMC]] [[5 nm process|4N]]
|rowspan="2" | 18.9
|rowspan="2" | 146
| rowspan="3" |[[PCIe 4.0|PCIe 4.0 x8]]
|1140-2370
|1605-2370
| rowspan="3" |2000<br />16<br />14000 (Max-Q)<br />16
|2560:80:<br>32:80:20<br>(20) (2)
|12
|6
|168.0<br/>192.0
|rowspan="5" | [[GDDR6]]<br/><ref>{{Cite web |date=2023-01-03 |title=NVIDIA announces GeForce RTX 40 Laptop GPU series, RTX 4090 with 9728 CUDAs and 16GB GDDR6 memory |url=https://videocardz.com/newz/nvidia-announces-geforce-rtx-40-laptop-gpu-series-rtx-4090-with-9728-cudas-and-16gb-gddr6-memory |access-date=2023-02-24 |website=videocardz |language=en-US |archive-date=4 June 2023 |archive-url=https://web.archive.org/web/20230604205659/https://videocardz.com/newz/nvidia-announces-geforce-rtx-40-laptop-gpu-series-rtx-4090-with-9728-cudas-and-16gb-gddr6-memory |url-status=live }}</ref>
|96
|36.4-75.8<br />51.3-75.8
|91.2-189.6<br />128.4-189.6
|5.83-12.1<br />8.21-12.1
|5.83-12.1<br />8.21-12.1
|0.09-0.18<br />0.12-0.18
|46.6-97.0<br />65.7-97.0
|93.3-194.1<br />131.5-194.1
|
|
|rowspan="3" | 35–115
|-
!scope="row" style="text-align:left;" | GeForce RTX 4060 Mobile/<br/>Laptop<ref name=":1" />
|AD107<br/>GN21-X4
|1140-2295
|1470-2370
|3072:96:<br>32:96:24<br>(24) (2)
| rowspan="2" | 32
|rowspan="2" | 8
|rowspan="2" | 224.0<br/>256.0
|rowspan="2" | 128
|36.4-73.4<br/>47.0-75.8
|109.4-220.3<br />141.1-227.5
|7.00-14.1<br />9.03-14.5
|7.00-14.1<br />9.03-14.5
|0.10-0.22<br />0.14-0.23
|56.0-112.8<br />72.2-116.4
|112.0-225.6<br />144.5-232.9
|
|
|-
!scope="row" style="text-align:left;" |GeForce RTX 4070 Mobile/<br/>Laptop<ref name=":1" />
|AD106<br/>GN21-X6
|22.9
|190
|735-2070
|1230-2175
|4608:144:<br>48:144:36<br>(36) (3)
|35.2-99.3<br/>59.0-104.4
|105.8-298.0<br/>177.1-313.2
|6.77-19.0<br />11.3-20.0
|6.77-19.0<br/>11.3-20.0
|0.10-0.29<br/>0.17-0.31
|54.1-152.6<br/>90.6-160.3
|108.3-305.2<br/>181.3-320.7
|
|
|-
!style="text-align:left;" |GeForce RTX 4080 Mobile/<br/>Laptop<ref name=":1" />
|rowspan="2" | February 8, 2023
|AD104<br/>GN21-X9
|35.8
|295
| rowspan="2" |[[PCIe 4.0|PCIe 4.0 x16]]
|795-1860
|1350-2280
| rowspan="2" |2250<br />18<br />14000 <br>(Max-Q)<br />18
|7424:232:<br>80:232:58<br>(58) (5)
|48
|12
|336.0<br/>432.0
|192
|63.6-148.8<br/>108.0-182.4
|184.4-431.5<br/>313.2-528.9
|11.8-27.6<br/>20.0-33.8
|11.8-27.6<br/>20.0-33.8
|0.18-0.43<br/>0.31-0.52
|94.4-220.9<br/>160.3-270.8
|188.8-441.8<br/>320.7-541.6
|
|
|60-150
|-
!scope="row" style="text-align:left;" |GeForce RTX 4090 Mobile/<br/>Laptop<ref name=":1" />
|AD103<br/>GN21-X11
|45.9
|379
|930-1620
|1455-2580
|9728:304:<br>112:304:76<br>(76) (7)
|64
|16
|448.0<br/>576.0
|256
|104.1-181.4<br/>162.9-288.9
|282.7-492.4<br/>442.3-784.3
|18.0-31.5<br />28.3-50.1
|18.0-31.5<br/>28.3-50.1
|0.28-0.49<br/>0.44-0.78
|144.8-252.1<br/>226.4-401.5
|289.5-504.2<br/>452.9-803.1
|
|
|80-150
|-
|}
{{notelist|refs=
{{efn|name=CoreConfig| Main [[Unified shader model|Shader Processors]] ''':''' [[texture mapping unit|Texture Mapping Units]] ''':''' [[render output unit|Render Output Units]] ''':''' [[Tensor]] Cores ''':''' [[Ray tracing (graphics)|Ray-tracing]] Cores (Streaming Multiprocessors) (Graphics Processing Clusters)}}
{{efn|name=PerfValues|Base clock, Boost clock.}}
{{efn|name=PixelFillrate|Pixel fillrate is calculated as the lowest of three numbers: number of ROPs multiplied by the base core clock speed, number of rasterizers multiplied by the number of fragments they can generate per rasterizer multiplied by the base core clock speed, and the number of streaming multiprocessors multiplied by the number of fragments per clock that they can output multiplied by the clock rate.}}
{{efn|name=TextureFillrate|Texture fillrate is calculated as the number of TMUs multiplied by the core clock speed.}}
<!-- unused
{{efn|name=Sparsity|Numbers use sparsity feature to potentially double performance (by skipping up to half of zero multiplies in a matrix).}}
-->
}}
===GeForce 50 series===
{{Further|GeForce RTX 50 series|Blackwell (microarchitecture)}}
Laptops featuring GeForce RTX 50 series laptop GPUs were shown at CES 2025, paired with [[Intel]]'s [[Arrow Lake (microprocessor)|Arrow Lake-HX]] and [[Advanced Micro Devices|AMD]]'s Strix Point and Fire Range CPUs.<ref>{{cite web |last1=Khan |first1=Safraz |date=January 7, 2025 |title=MSI Launches GeForce RTX 50-Series Gaming Laptops: RTX 5090-Powered Titan 18 HX Dragon Edition Aimed For Elite Performance |url=https://wccftech.com/msi-geforce-rtx-50-series-gaming-laptops/ |website=Wccftech |language=en-US |access-date=January 9, 2025}}</ref><ref>{{cite web |title=Gigabyte Shows Off Custom GeForce RTX 50 Series Designs for Desktop, and Blackwell-Powered Laptops |url=https://www.techpowerup.com/330688/gigabyte-shows-off-custom-geforce-rtx-50-series-designs-for-desktop-and-blackwell-powered-laptops |website=TechPowerUp |language=en-US |date=January 7, 2025 |access-date=January 9, 2025}}</ref> Nvidia claims that the Blackwell architecture's new Max-Q features can increase battery life by up to 40% over GeForce 40 series laptops.<ref>{{cite web |last1=Burnes |first1=Andrew |date=January 6, 2025 |title=New GeForce RTX 50 Series Graphics Cards & Laptops Powered By Nvidia Blackwell Bring Game-Changing AI and Neural Rendering Capabilities To Gamers and Creators |url=https://www.nvidia.com/en-us/geforce/news/rtx-50-series-graphics-cards-gpu-laptop-announcements/ |website=Nvidia |language=en-US |access-date=January 9, 2025}}</ref> For example, Advanced Power Gating saves power by turning off unused areas of the GPU, and the paired GDDR7 memory can run in an "ultra" low-voltage state.<ref>{{cite web |last1=Nasir |first1=Hassam |date=January 7, 2025 |title=Nvidia introduces RTX 5090, RTX 5080, and RTX 5070 laptop GPUs — RTX 50 Blackwell goes mobile with up to 24GB of GDDR7 memory |url=https://www.tomshardware.com/pc-components/gpus/nvidia-introduces-rtx-5090-rtx-5080-and-rtx-5070-laptop-gpus-rtx-50-blackwell-goes-mobile-with-up-to-24gb-of-gddr7-memory |website=Tom's Hardware |language=en-US |access-date=January 9, 2025}}</ref> The first RTX 50 series laptops became available in March 2025, starting at $1,299.<ref>{{cite web |last1=Osborne |first1=Joe |date=January 7, 2025 |title=Nvidia GeForce RTX 50-Series Mobile GPUs Bring AI-Based Rocket Fuel to Gaming Laptops This Spring |url=https://uk.pcmag.com/graphics-cards/156138/nvidia-50-series-mobile-gpus-bring-ai-based-rocket-fuel-to-gaming-laptops-this-spring |website=PCMag UK |language=en-GB |access-date=January 9, 2025}}</ref>
{{sticky header}}
{| class="wikitable sticky-header" style="text-align:center; white-space:nowrap;"
! colspan="2" | GeForce RTX
! 5050<br />Laptop<ref name="spec_of_50_series">{{Cite web |date=2025-06-26 |title=NVIDIA GeForce RTX 5000 Mobile Series Specs |url=https://www.nvidia.com/en-eu/geforce/laptops/50-series/ |access-date=2025-06-26 |website=nvidia |language=en}}</ref><ref>{{Cite web |date=2025-06-26 |title=NVIDIA GeForce RTX 5050 Mobile Specs |url=https://www.techpowerup.com/gpu-specs/geforce-rtx-5050-mobile.c4239 |access-date=2025-06-26 |website=TechPowerUp |language=en}}</ref>
! 5060<br />Laptop<ref name="spec_of_50_series"/><ref>{{Cite web |date=2025-04-21 |title=NVIDIA GeForce RTX 5060 Mobile Specs |url=https://www.techpowerup.com/gpu-specs/geforce-rtx-5060-mobile.c4230 |access-date=2025-04-21 |website=TechPowerUp |language=en}}</ref>
! 5070<br />Laptop<ref name="spec_of_50_series"/><ref>{{Cite web |date=2025-01-11 |title=NVIDIA GeForce RTX 5070 Mobile Specs |url=https://www.techpowerup.com/gpu-specs/geforce-rtx-5070-mobile.c4237 |access-date=2025-01-11 |website=TechPowerUp |language=en}}</ref>
! 5070 Ti<br />Laptop<ref name="spec_of_50_series"/><ref>{{Cite web |date=2025-01-11 |title=NVIDIA GeForce RTX 5070 Ti Mobile Specs |url=https://www.techpowerup.com/gpu-specs/geforce-rtx-5070-ti-mobile.c4238 |access-date=2025-01-11 |website=TechPowerUp |language=en}}</ref>
! 5080<br />Laptop<ref name="spec_of_50_series"/><ref>{{Cite web |date=2025-01-11 |title=NVIDIA GeForce RTX 5080 Mobile Specs |url=https://www.techpowerup.com/gpu-specs/geforce-rtx-5080-mobile.c4236 |access-date=2025-01-11 |website=TechPowerUp |language=en}}</ref>
! 5090<br />Laptop<ref name="spec_of_50_series"/><ref>{{Cite web |date=2025-01-11 |title=NVIDIA GeForce RTX 5090 Mobile Specs |url=https://www.techpowerup.com/gpu-specs/geforce-rtx-5090-mobile.c4235 |access-date=2025-01-11 |website=TechPowerUp |language=en}}</ref><ref>{{Cite web |title=NVIDIA GeForce RTX 5090 - Specifications and Benchmark - SocSpecifications |url=https://socspecifications.com/product/nvidia-geforce-rtx-5090-specifications-and-benchmark/ |access-date=2025-01-31 |language=en-US}}</ref>
|-
! colspan="2" | Release date
| Jun 2025
| May 2025
| Apr 2025
| colspan="3" | Mar 2025
|-
! colspan="2" | GPU [[Die (integrated circuit)|die]]
| GB207
| GB206
| GB206-300
| GB205-200
| GB203-400
| GB203-400
|-
! colspan="2" | Transistors <small>(billion)</small>
| 16.9
| colspan="2"| 21.9
| 31.1
| colspan="2" | 45.6
|-
! colspan="2" | Die size
| 149 mm<sup>2</sup>
| colspan="2" | 181 mm<sup>2</sup>
| 263 mm<sup>2</sup>
| colspan="2" | 378 mm<sup>2</sup>
|-
! rowspan="6" | Core
! [[Unified shader model|CUDA cores]]
| 2,560
| 3,328
| 4,608
| 5,888
| 7,680
| 10,496
|-
! [[Texture mapping unit]]
| 80
| 104
| 144
| 184
| 240
| 328
|-
! [[Render output unit]]
| 32
| 48
| 48
| 64
| 96
| 112
|-
! [[Nvidia RTX|Ray tracing cores]]
| 20
| 26
| 36
| 46
| 60
| 82
|-
! [[Tensor Core|Tensor cores]]
| 80
| 104
| 144
| 184
| 240
| 328
|-
! Clock speed <small>([[Hertz|MHz]])</small><br />''Boost value <small>([[Hertz|MHz]])</small>''
| 1500<br />''2662''
| 1455<br />''2497''
| 1425<br />''2347''
| 1447<br />''2220''
| 1500<br />''2287''
| 1597<br />''2160''
|-
! colspan="2" | [[Streaming Multiprocessor|Streaming multiprocessors]]
| 20
| 26
| 36
| 46
| 60
| 82
|-
! rowspan="2" | [[Cache (computing)#GPU cache|Cache]]
! L1
| 2.5 MB
| 3.25 MB
| 4.5 MB
| 5.75 MB
| 7.5 MB
| 10.25 MB
|-
! L2
| colspan="3" | 32 MB
| 48 MB
| colspan="2" | 64 MB
|-
! rowspan="5" | [[Video random access memory|Memory]]
! Type
| colspan="6" |[[GDDR7 SDRAM|GDDR7]]
|-
! Size
| colspan="3" | 8 GB
| 12 GB
| 16 GB
| 24 GB
|-
! Clock <small>([[Data-rate units|Gb]]/s)</small>
| colspan="3" | 24
| colspan="3" | 28
|-
! Bandwidth <small>([[Gigabyte|GB]]/s)</small>
| colspan="3" | 384
| 672
| colspan="2" | 896
|-
! Bus width
| colspan="3" |128-bit
| 192-bit
| colspan="2" | 256-bit
|-
! rowspan="2" | [[Fillrate]]
! Pixel <small>([[Pixel|Gpx]]/s)</small>{{efn|name="pixel fillrate"}}
| 85.2
| 119.9
| 112.7
| 142.1
| 219.6
| 241.9
|-
! Texture <small>([[Texel (graphics)|Gtex]]/s)</small>{{efn|name="texture fillrate"}}
| 213.0
| 259.7
| 338.0
| 408.5
| 548.9
| 708.5
|-
! rowspan="3" | Processing<br/>power<br/><small>([[FLOPS|TFLOPS]])</small>
! [[Half-precision floating-point format|FP16]]/[[Single-precision floating-point format|FP32]]
| 13.63
| 16.62
| 21.63
| 26.14
| 35.13
| 45.34
|-
! [[Double-precision floating-point format|FP64]]
| 0.213
| 0.260
| 0.338
| 0.408
| 0.549
| 0.708
|-
! [[Tensor]] [[Half-precision floating-point format|compute]]<br/>[sparse]
|
|
|
|
|
|
|-
! rowspan=1 | Interface
! Host
| colspan="6" |[[PCI Express#PCI Express 5.0|PCIe 5.0]] x16
|-
! colspan="2" | [[Thermal design power|TDP]]<ref>{{cite web |last1=n/a |first1=WhyCry |date=July 27, 2025 |title=ASUS shares RTX 50 TGP specs for entire ROG Strix/TUF gaming laptop series |url=https://videocardz.com/newz/asus-shares-rtx-50-tgp-specs-for-entire-rog-strix-tuf-gaming-laptop-series |website=Videocardz |language=en-US |access-date=July 28, 2025}}</ref>
| 50–100 W
| 45–100 W
| 50–100 W
| 60–115 W
| 80–150 W
| 95–150 W
|}
{{notelist|refs=
{{efn|name="pixel fillrate"|Pixel fillrate is calculated as the number of render output units (ROPs) multiplied by the base (or boost) core clock speed.}}
{{efn|name="texture fillrate"|Texture fillrate is calculated as the number of texture mapping units (TMUs) multiplied by the base (or boost) core clock speed.}}
}}
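As a worked check of the fillrate formulas in the notes above (assuming the boost clock is used), the RTX 5050 Laptop figures in the table (32 ROPs, 80 TMUs, 2662 MHz boost clock) reproduce the listed boost-clock fillrates:
<math display="block">32 \times 2.662\ \text{GHz} \approx 85.2\ \text{Gpx/s} \qquad 80 \times 2.662\ \text{GHz} \approx 213.0\ \text{Gtex/s}</math>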
===GeForce MX series===
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
|-
! scope="col" rowspan="2" | Model name<br>([[Microarchitecture|Architecture]])
! scope="col" rowspan="2" | Launch
! scope="col" rowspan="2" | [[Code name]](s)
! scope="col" rowspan="2" | Process
! scope="col" rowspan="2" | Transistors (billion)
! scope="col" rowspan="2" | Die{{nbsp}}size ([[Millimeter Squared|mm<sup>2</sup>]])
! rowspan="2" scope="col" | [[Computer bus|Bus]] [[I/O interface|interface]]
! colspan="3" scope="colgroup" | Clock speeds
! rowspan="2" scope="col" | [[GPU cache|L2 Cache]] ([[Mebibyte|MiB]])
! rowspan="2" scope="col" | Core config {{efn|name=CoreConfig}}
! scope="colgroup" colspan="4" | [[Video Random Access Memory|Memory]]
! scope="colgroup" colspan="2" | [[Fillrate]]{{efn|name=PerfValues}}
! colspan="4" |Processing{{nbsp}}power{{nbsp}}([[GFLOPS]]){{efn|name=PerfValues}}
! scope="colgroup" colspan="2" | [[Ray tracing (graphics)|Ray{{nbsp}}tracing]] Performance
! scope="col" rowspan="2" | [[Thermal design power|TDP]]<br>([[Watt]]s)
|-
! scope="col" | Base<br />core<br />clock<br />([[Hertz|MHz]])
! scope="col" | Boost<br />core<br />clock<br />([[Hertz|MHz]])
! scope="col" | Memory ([[Transfers per second|GT/s]])
! scope="col" | Size<br />([[Gibibyte|GiB]])
! scope="col" | Bandwidth<br />([[Data-rate units#Gigabyte per second|GB/s]])
! scope="col" | Type
! scope="col" | Bus<br />width<br />([[bit]])
! scope="col" | Pixel<br />([[Pixel|GP]]/s){{efn|name=PixelFillrate}}
! scope="col" | Texture<br />([[Texel (graphics)|GT]]/s){{efn|name=TextureFillrate}}
! scope="col" | [[Half precision floating-point format|Half<br />precision]]
! scope="col" | [[Single precision floating-point format|Single<br />precision]]
! scope="col" | [[Double precision floating-point format|Double<br />precision]]
! scope="col" | [[Tensor]]<br />compute<br />(FP16)
! scope="col" | Rays/s<br />(Billions)
! scope="col" | RTX{{nbsp}}OPS<br />(Trillions)
|-
! style="text-align:left;" | GeForce{{nbsp}}MX110<br>([[Maxwell (microarchitecture)|Maxwell]])<ref name='anand_maxwell_mobile'>{{cite news|last1=Oh|first1=Nate|title=NVIDIA Releases Maxwell-Based Geforce MX130 & MX110 Mobile GPUs|url=https://www.anandtech.com/show/12088/nvidia-releases-maxwell-geforce-mx130-and-mx110-mobile-gpus|access-date=November 28, 2017|publisher=AnandTech|date=November 28, 2017|archive-url=https://web.archive.org/web/20171128153116/https://www.anandtech.com/show/12088/nvidia-releases-maxwell-geforce-mx130-and-mx110-mobile-gpus|archive-date=November 28, 2017|url-status=dead}}</ref><ref>{{cite web|title=Geforce MX110 Specifications|url=https://www.geforce.com/hardware/notebook-gpus/nvidia-geforce-mx110/specifications|website=NVIDIA|access-date=November 28, 2017|archive-url=https://web.archive.org/web/20171224171035/https://www.geforce.com/hardware/notebook-gpus/nvidia-geforce-mx110/specifications|archive-date=December 24, 2017|url-status=live}}</ref>
| rowspan="2" |{{Dts|2017|11|17|format=mdy|abbr=on}}
| GM108<br>(N16V-GMR1-A1)
| rowspan="2" |[[TSMC]]<br>[[32 nm process|28 nm]]
| rowspan="2" {{Dunno}}
| rowspan="2" {{Dunno}}
| rowspan="9" |[[PCIe#PCI Express 3.0|PCIe{{nbsp}}3.0]]<br>×4
| 965
| 993
| rowspan="2" |1.8<br>(DDR3)<br>5<br>(GDDR5)
| 1
| 256:16 :8:3
| rowspan="2" |2
| rowspan="2" |14.4<br />(DDR3)<br />40.1<br />(GDDR5)
| rowspan="2" |[[DDR3]]<br>[[GDDR5]]
| rowspan="13" |64
| <br />(7.944)
| <br />(15.89)
| rowspan="2" {{N/a}}
| <br />(508.4)
| <br />(15.89)
| rowspan="12" {{N/a}}
| rowspan="12" {{N/a}}
| rowspan="12" {{N/a}}
| rowspan="2" |30
|-
! style="text-align:left;" | GeForce{{nbsp}}MX130<br>([[Maxwell (microarchitecture)|Maxwell]])<ref name='anand_maxwell_mobile' /><ref>{{cite web|title=Geforce MX130 Specifications|url=https://www.geforce.com/hardware/notebook-gpus/nvidia-geforce-mx130/specifications|website=NVIDIA|access-date=November 28, 2017|archive-url=https://web.archive.org/web/20171225000045/https://www.geforce.com/hardware/notebook-gpus/nvidia-geforce-mx130/specifications|archive-date=December 25, 2017|url-status=live}}</ref><ref>{{cite web|url=https://www.techpowerup.com/gpudb/3043/geforce-mx130|title=NVIDIA GeForce MX130 Specs|access-date=June 30, 2022|website=TechPowerUp|archive-date=25 April 2018|archive-url=https://web.archive.org/web/20180425031910/https://www.techpowerup.com/gpudb/3043/geforce-mx130|url-status=live}}</ref>
| GM108<br>(N16S-GTR-A1)
| 1122
| 1242
| 1
| 384:24 :8:3
| <br />(9.936)
| <br />(29.81)
| 861.7<br />(953.9)
| 26.93<br />(29.81)
|-
! style="text-align:left;" rowspan="2" |GeForce{{nbsp}}MX150<br>([[Pascal (microarchitecture)|Pascal]])<ref name="GeForce MX150">{{cite news|title=NVIDIA GeForce MX150|url=https://www.notebookcheck.net/NVIDIA-GeForce-MX150-Benchmark-and-Specs-of-the-GT-1030-for-Laptops.223530.0.html|access-date=December 8, 2017|archive-url=https://web.archive.org/web/20171207003057/https://www.notebookcheck.net/NVIDIA-GeForce-MX150-Benchmark-and-Specs-of-the-GT-1030-for-Laptops.223530.0.html|archive-date=December 7, 2017|website=NotebookCheck|url-status=live}}</ref><ref>{{cite web|title=Comparison: NVIDIA GeForce MX150 vs NVIDIA GeForce 940MX|url=https://www.notebookcheck.net/Comparison-NVIDIA-GeForce-MX150-vs-NVIDIA-GeForce-940MX.242750.0.html|access-date=October 27, 2017|archive-url=https://web.archive.org/web/20171027231616/https://www.notebookcheck.net/Comparison-NVIDIA-GeForce-MX150-vs-NVIDIA-GeForce-940MX.242750.0.html|archive-date=October 27, 2017|website=NotebookCheck|date=21 August 2017 |url-status=live}}</ref><ref>{{cite web|title=NVIDIA Announces GeForce MX150: Entry-Level Pascal for Laptops, Just in Time for Computex|url=https://www.anandtech.com/show/11449/nvidia-announces-geforce-mx150-for-laptops|access-date=October 27, 2017|archive-url=https://web.archive.org/web/20171027232702/https://www.anandtech.com/show/11449/nvidia-announces-geforce-mx150-for-laptops|archive-date=October 27, 2017|website=AnandTech|url-status=dead}}</ref>
| rowspan="2" |{{Dts|2017|05|17|format=mdy|abbr=on}}
| GP108<br>(N17S-LG-A1)
| rowspan="7" |[[Samsung]]<br>[[14 nm]]
| rowspan="6" |1.8
| rowspan="6" |74
| 937
| 1038
| 5
| 0.5
| rowspan="2" |384:24 :16:3
| rowspan="2" |2<br>4
| 40
| rowspan="8" |[[GDDR5]]
| <br />(14.99)
| <br />(22.49)
| 11.24<br />(12.45)
| 719.6<br />(797.2)
| 22.49<br />(24.91)
| 10
|-
| GP108-650-A1<br>(N17S-G1-A1)
| 1468
| 1532
| 6
| 0.5
| 48
| <br />(23.49)
| <br />(35.23)
| 17.62<br />(18.38)
| 1127<br />(1177)
| 35.23<br />(36.77)
| 25
|-
! scope="row" style="text-align:left;" | GeForce{{nbsp}}MX230<br>([[Pascal (microarchitecture)|Pascal]])<ref name="GeForce MX230">{{cite web|title=NVIDIA GeForce MX230|url=https://www.notebookcheck.net/NVIDIA-GeForce-MX230-Graphics-Card.382351.0.html|website=NotebookCheck|access-date=October 15, 2019|archive-date=16 April 2020|archive-url=https://web.archive.org/web/20200416103726/https://www.notebookcheck.net/NVIDIA-GeForce-MX230-Graphics-Card.382351.0.html|url-status=live}}</ref>
| rowspan="3" |{{Dts|2019|02|20|format=mdy|abbr=on}}
| GP108<br>(N17S-G0-A1)
| 1519
| 1531
| rowspan="5" |7
| rowspan="5" |0.5
| 256:16 :16:2
| rowspan="5" |2<br>4
| 56
| <br />(25.31)
| <br />(25.31)
| <br />(12.66)
| <br />(810.0)
| <br />(25.31)
| 10
|-
! scope="row" rowspan="2" style="text-align:left;" | GeForce{{nbsp}}MX250<br>([[Pascal (microarchitecture)|Pascal]])<ref name="GeForce MX250">{{cite web|title=NVIDIA GeForce MX250|url=https://www.notebookcheck.net/NVIDIA-GeForce-MX250-Graphics-Card.382341.0.html|website=NotebookCheck|access-date=October 15, 2019|archive-date=16 April 2020|archive-url=https://web.archive.org/web/20200416105136/https://www.notebookcheck.net/NVIDIA-GeForce-MX250-Graphics-Card.382341.0.html|url-status=live}}</ref><ref>{{cite web|url=https://www.techpowerup.com/gpu-specs/geforce-mx250.c3354|title=NVIDIA GeForce MX250 Specs|website=TechPowerUp|language=en|access-date=March 3, 2020|archive-date=3 March 2020|archive-url=https://web.archive.org/web/20200303080940/https://www.techpowerup.com/gpu-specs/geforce-mx250.c3354|url-status=live}}</ref>
| GP108<br>(N17S-LG-A1)
| 937
| 1038
| rowspan="3" |384:24 :16:3
| 48
| <br />(16.61)
| <br />(24.91)
| <br />(12.46)
| <br />(797.2)
| <br />(24.91)
| 10
|-
| GP108<br>(N17S-G2-A1)
| 1518
| 1582
| rowspan="4" |56
| <br />(24.3)
| <br />(36.4)
| <br />(18.98)
| <br />(1166)
| <br />(37.97)
| 25
|-
! scope="row" style="text-align:left;" | GeForce{{nbsp}}MX330<br>([[Pascal (microarchitecture)|Pascal]])<ref>{{cite web|url=https://www.notebookcheck.net/Graphics-Card-Comparison-Head-2-Head.247598.0.html|title=Graphics Card Comparison - Head 2 Head|last=Redaktion|website=NotebookCheck|language=en|access-date=February 17, 2020|archive-date=17 February 2020|archive-url=https://web.archive.org/web/20200217115933/https://www.notebookcheck.net/Graphics-Card-Comparison-Head-2-Head.247598.0.html|url-status=live}}</ref>
| rowspan="2" |{{Dts|2020|02|12|format=mdy|abbr=on}}
| GP108-655-A1<br>(N17S-LP-A1)<br>(N17S-G3-A1)
| 746 (LP)<br />1531
| 936 (LP)<br />1594
| <br />(25.50)
| <br />(38.26)
| <br />(19.13)
| <br />(1224)
| <br />(38.26)
| 10–30
|-
! scope="row" style="text-align:left;" | GeForce{{nbsp}}MX350<br>([[Pascal (microarchitecture)|Pascal]])<ref>{{cite web|url=https://www.anandtech.com/show/15515/nvidia-quietly-reveals-geforce-mx350-mx330|title=NVIDIA Quietly Reveals GeForce MX350 & MX330: 2020's Entry-Level Laptop GeForce|last=Smith|first=Ryan|date=February 14, 2020|website=AnandTech|access-date=February 17, 2020|archive-date=16 February 2020|archive-url=https://web.archive.org/web/20200216000631/https://www.anandtech.com/show/15515/nvidia-quietly-reveals-geforce-mx350-mx330|url-status=dead}}</ref>
| GP107-670-A1<br>(N17S-LP-A1)<br>(N17S-G5-A1)
| 3.3
| 132
| 747 (LP)<br />1354
| 937 (LP)<br />1468
| 640:32 :16:5
| <br />(23.49)
| <br />(46.98)
| <br />(29.36)
| <br />(1879)
| <br />(58.72)
| 15–25
|-
! style="text-align:left;" rowspan=2| GeForce{{nbsp}}MX450<br>([[Turing (microarchitecture)|Turing]])<ref>{{cite web|url=https://www.nvidia.com/en-us/geforce/gaming-laptops/mx-450/|title=NVIDIA GeForce MX450 Laptop Graphics|website=NVIDIA|access-date=22 September 2020|archive-date=28 August 2020|archive-url=https://web.archive.org/web/20200828175713/https://www.nvidia.com/en-us/geforce/gaming-laptops/mx-450/|url-status=live}}</ref><ref>{{cite web|url=https://www.notebookcheck.net/NVIDIA-GeForce-MX450-N18S-G5-GPU-Benchmarks-and-Specs.461020.0.html|title=NVIDIA GeForce MX450|website=NotebookCheck|access-date=3 September 2020|archive-date=2 September 2020|archive-url=https://web.archive.org/web/20200902221856/https://www.notebookcheck.net/NVIDIA-GeForce-MX450-N18S-G5-GPU-Benchmarks-and-Specs.461020.0.html|url-status=live}}</ref>
| rowspan=2 |{{Dts|2020|08|01|format=mdy|abbr=on}}
| rowspan=2 |TU117<br>(N18S-LP-A1)<br>(N18S-G5-A1)
| rowspan=3 |[[TSMC]]<br>[[14 nm|12FFN]]
| rowspan=3 |4.7
| rowspan=3 |200
| rowspan="4" |[[PCIe#PCI Express 4.0|PCIe{{nbsp}}4.0]]<br>×4
| rowspan="2" |720 (LP)<br />1395
| rowspan="2" |930 (LP)<br />1575
| 7
| rowspan="2" |1
| rowspan="2" |896:56 :32:14
| rowspan=2 |2
| rowspan=2 | 23.04<br />(50.40)
| rowspan=2 | 40.32<br />(88.20)
| rowspan="2" | 2581<br />(5645)
| rowspan="2" | 1290<br />(2822)
| rowspan=2 | 40.32<br />(88.20)
| rowspan=2 |10–30
|-
| 10
| 80
| rowspan=3 |[[GDDR6 SDRAM|GDDR6]]
|-
! scope="row" style="text-align:left;" | GeForce{{nbsp}}MX550<br>([[Turing (microarchitecture)|Turing]])<ref name="notebookcheck-MX550vs570">{{cite web|url=https://www.notebookcheck.net/GeForce-MX550-vs-GeForce-MX570_11110_11112.247598.0.html|title=NVIDIA GeForce MX550 vs NVIDIA GeForce MX570|website=NotebookCheck|access-date=May 23, 2022|archive-date=1 August 2023|archive-url=https://web.archive.org/web/20230801145719/https://www.notebookcheck.net/GeForce-MX550-vs-GeForce-MX570_11110_11112.247598.0.html|url-status=live}}</ref><ref>{{cite web|url=https://www.nvidia.com/en-us/geforce/gaming-laptops/mx-550/|title=GeForce MX550 Dedicated Graphics for Laptops|website=NVIDIA|archive-url=https://web.archive.org/web/20220607031710/http://www.nvidia.com/en-us/geforce/gaming-laptops/mx-550/|archive-date=June 7, 2022|access-date=June 30, 2022}}</ref><ref name="anandtech-MX500">{{cite web|last=Smith|first=Ryan|date=December 17, 2021|title=NVIDIA Announces GeForce RTX 2050, MX570, and MX550 For Laptops: 2022's Entry Level GeForce|url=https://www.anandtech.com/show/17124/nvidia-announces-geforce-rtx-2050-mx570-and-mx550-for-laptops-2022s-entry-level-geforce|access-date=December 18, 2021|website=AnandTech|archive-date=24 February 2024|archive-url=https://web.archive.org/web/20240224182257/https://www3.anandtech.com/show/17124/nvidia-announces-geforce-rtx-2050-mx570-and-mx550-for-laptops-2022s-entry-level-geforce|url-status=dead}}</ref><ref>{{cite web|url=https://www.techpowerup.com/gpu-specs/geforce-mx550.c3860|title=NVIDIA GeForce MX550 Specs|website=TechPowerUp|access-date=May 23, 2022}}</ref>
| {{Dts|2022|03||format=mdy|abbr=on}}
| TU117-670-A1<br>(GN18-S5-A1)
| 1065
| 1320
| rowspan="2" |12
| 2
| 1024:32 :16:8
| 2<br>4
| 96
| <br />(21.12)
| <br />(42.24)
| <br />(2703)
| <br />(2703)
| <br />(42.24)
| 15–30
|-
! scope="row" style="text-align:left;" | GeForce{{nbsp}}MX570<br>([[Ampere (microarchitecture)|Ampere]])<ref name="notebookcheck-MX550vs570" /><ref>{{cite web|url=https://www.nvidia.com/en-us/geforce/gaming-laptops/mx-570/|title=GeForce MX570 Dedicated Graphics for Laptops|website=NVIDIA|access-date=June 30, 2022|archive-date=28 November 2023|archive-url=https://web.archive.org/web/20231128205656/https://www.nvidia.com/en-us/geforce/gaming-laptops/mx-570/|url-status=live}}</ref><ref name="anandtech-MX500" /><ref>{{cite web|url=https://www.techpowerup.com/gpu-specs/geforce-mx570.c3861|title=NVIDIA GeForce MX570 Specs|website=TechPowerUp|access-date=May 23, 2022}}</ref>
| {{Dts|2022|03||format=mdy|abbr=on}}
| GA107<br>(GN20-S5-A1)
| [[Samsung]]<br>[[10 nm process|8N]]
| 8.7
| 200
| 1087
| 1155
| 2
| 2048:64 :40:16 :16:64
| 2<br>4
| 96
| <br />(46.20)
| <br />(73.92)
| <br />(4731)
| <br />(4731)
| <br />(73.92)
|
|
|
| 15–45
|-
|}
{{notelist|refs=
{{efn|name=CoreConfig|[[Unified shader model|Shader Processors]] : [[Texture mapping unit]]s : [[Render output unit]]s : Streaming Multiprocessors : [[Nvidia RTX|Ray tracing cores]] : [[Tensor Core]]s}}
{{efn|name=PixelFillrate|Pixel fillrate is calculated as the lowest of three numbers: number of ROPs multiplied by the base core clock speed, number of rasterizers multiplied by the number of fragments they can generate per rasterizer multiplied by the base core clock speed, and the number of streaming multiprocessors multiplied by the number of fragments per clock that they can output multiplied by the base clock rate.}}
{{efn|name=TextureFillrate|Texture fillrate is calculated as the number of TMUs multiplied by the base core clock speed.}}
{{efn|name=PerfValues|Base clock, Boost clock}}
}}
== Workstation / Mobile Workstation GPUs ==
===Quadro NVS===
{{Further|Quadro}}
* <sup>1</sup> [[Vertex shader]]s: [[pixel shader]]s: [[texture mapping unit]]s: [[render output unit]]s
* <sup>2</sup> [[Unified shader model|Unified shaders]]: [[texture mapping unit]]s: [[render output unit]]s
* <sup>*</sup> NV31, NV34 and NV36 are 2x2 pipeline designs if running vertex shader, otherwise they are 4x1 pipeline designs.
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
|-
! rowspan=2 | Model
! rowspan="2" |Launch
! rowspan=2 | [[Code name]]
! rowspan=2 | Fab ([[Nanometer|nm]])
! rowspan=2 | [[Computer bus|Bus]] [[I/O interface|interface]]
! rowspan=2 | Core clock ([[Hertz|MHz]])
! rowspan=2 | Shader clock ([[Hertz|MHz]])
! rowspan=2 | Memory clock ([[Hertz|MHz]])
! rowspan="2" | Core config<sup>12*</sup>
! colspan=4 | Memory
! colspan="2" |[[Fillrate]]
! Processing power ([[GFLOPS]])
! colspan=3 | Supported [[Application programming interface|API]] version
! rowspan=2 | [[Thermal design power|TDP]] (Watts)
! rowspan=2 | Notes
|-
! Size ([[Mebibyte|MiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width ([[bit]])
!Pixel ([[Pixel|GP]]/s)
!Texture ([[Texel (graphics)|GT]]/s)
! [[Single precision floating-point format|Single precision]]
! [[Direct3D]]
! [[OpenGL]]
! [[Vulkan]]
|-
!NVS 50
|May 31, 2005
|NV18
| rowspan="3" |150
| rowspan="3" |AGP 4x/PCI
|250
|250
|200
|0:2:4:2
|64
|1.6
| rowspan="6" |DDR
|32
|0.5
|1.0
|
| rowspan="3" |7
| rowspan="4" |1.2
| rowspan="15" | n/a
| rowspan="4" |
|[[DVI-I]], [[S-Video]]
|-
!NVS 100
|Dec 22, 2003
| rowspan="2" |NV17
|200
|
|333
|0:2:4:2
|64
|2.664
|64
|
|
|
|2x DVI-I, [[VGA]], S-Video
|-
!NVS 200
|Dec 22, 2003
|250
|250
|250
|0:2:4:2
|64
|8.0
| rowspan="4" |128
|0.5
|1.0
|
|[[Low Force Helix|LFH-60]]
|-
!NVS 210S
|Dec 22, 2003
|MCP51
|90
|Integrated
|425
|
|
|1:2:2:1
|Up to 256 from system memory
|
|0.425
|0.850
|
|9.0c
|[[Digital Visual Interface|DVI]] + [[VGA]]
|-
!NVS 280
|Oct 28, 2003
|NV34GL
|150
|PCIe x16/AGP 8x / PCI
| rowspan="2" |275
| rowspan="2" |275
|250
|0:2:4:2/<br />1:2:2:2 *:4:4:4
|64
|8.0
| rowspan="2" |0.55
| rowspan="2" |1.1
|
| rowspan="2" |9.0
|1.5
|13
|[[DMS-59]]
|-
!NVS 285
|Jun 6, 2006
|NV44
|110
| rowspan="4" |PCIe x1/x16
|275
|3:4:4:2
|128
|8.8
|
|2.1
|18
|[[DMS-59]]
|-
!NVS 290
|Oct 4, 2007
|G86-827-A2
|80
|460
|920
|800
|16:8:4
| rowspan="2" |256
|6.4
|DDR2
| rowspan="5" |64
|1.84
|3.68
|44.16
| rowspan="2" |10
| rowspan="3" |3.3
|21
|[[DMS-59]]
|-
!NVS 295
|May 7, 2009
|G98
|65
|550
|1300
|1400
|8:8:4
|11.2
|GDDR3
|2.2
|4.4
|31.2
|23
|2x DisplayPort or 2x DVI-D
|-
!NVS 300
|Jan 8, 2011
|GT218
| rowspan="3" |40
|589
|1402
|1580
|16:8:4
| rowspan="2" |512
|12.64
| rowspan="3" |DDR3
|2.356
|4.712
|67.3
|10.1
|17.5
|[[DMS-59]]
|-
!NVS 310
|Jun 26, 2012
| rowspan="2" |GF119
| rowspan="2" |PCIe x16
| rowspan="2" |523
| rowspan="2" |1046
| rowspan="2" |1750
| rowspan="2" |48:8:4
| rowspan="2" |14
| rowspan="2" |2.092
| rowspan="2" |4.184
|100.4
| rowspan="2" |11.0
| rowspan="2" |4.1
| rowspan="2" |19.5
|2x DisplayPort
|-
!NVS 315
|Mar 10, 2013
|1024
|
|[[DMS-59]] Idle Power Consumption 7 W
|-
!NVS 400
|Jul 16, 2004
|2x NV17
|150
|PCI
|220
|220
|332
|2x 0:2:4:2
|2x 64
|2x 11.0
|DDR
|2x 128
|2x 0.44
|2x 0.88
|2x 5.328
|7
|1.2
|18
|2x [[Low Force Helix|LFH-60]]
|-
!NVS 420
|Jan 20, 2009
|2xG98-850-U2
|65
| rowspan="3" |PCIe x1/x16
|550
|1300
|1400
|2x 8:8:4
|2x 256
|2x 11.2
|GDDR3
|2x 64
|2x 2.2
|2x 4.4
|2x 31.2
|10
|3.3
|40
|through VHDCI to (4x DisplayPort or 4x DVI-D)
|-
!NVS 440
|Feb 14, 2006
|2xNV43
|110
|250
|
|500
|2x 4:8:8:8
|2x 128
|2x 8.000
|DDR
|2x 128
|2x 2.000
|2x 2.000
|
|9.0
|2.1
|31
|2x [[DMS-59]]<ref>{{cite web |title=Nvidia Quadro NVS Technical Specifications |url=http://http.download.nvidia.com/ndemand/Quadro_extranet/Product_Overview/PO_QUADRO_NVS_MAY06_REV2.pdf |work=Nvidia press release |access-date=2014-05-15 |archive-url=https://web.archive.org/web/20120914092314/http://http.download.nvidia.com/ndemand/Quadro_extranet/Product_Overview/PO_QUADRO_NVS_MAY06_REV2.pdf |archive-date=2012-09-14 |url-status=live }}</ref>
|-
!NVS 450
|Nov 11, 2008
|2xG98
|65
|550
|1300
|1400
|2x 8:8:4
|2x 256
|2x 11.2
|GDDR3
|2x 64
|2x 2.2
|2x 4.4
|2x 31.2
|10
|3.3
| rowspan="2" |35
|4x DisplayPort
|-
!NVS 510
|Oct 23, 2012
|GK107
| rowspan="2" | 28
|PCIe 2.0 x16
|797
|
|1782
|192:16:8<br />(1 SMX)
|2048
|28.5
| rowspan="2" |DDR3
|128
|3.188
|12.75
|306.0
| rowspan="2" |11.0
| rowspan="2" |4.6
| 1.2
|4x miniDisplayPort
|-
!NVS 810
|Nov 4, 2015
|2x GM107
|PCIe 3.0 x16
|1033
|
|1800
|2x 512:32:16
|2x 2048
|2x 14.4
|2x 64
|16.53
|33.06
|1058
|1.3
|68
|8x miniDisplayPort
|}
=== Mobility Quadro NVS series ===
{{Further|Quadro}}
* <sup>1</sup> [[Vertex shader]]s: [[pixel shader]]s: [[texture mapping unit]]s: [[render output unit]]s
* <sup>2</sup> [[Unified shader model|Unified shaders]]: [[texture mapping unit]]s: [[render output unit]]s
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
|-
! rowspan=2 | Model
! rowspan="2" |Launch
! rowspan=2 | [[Code name]]
! rowspan=2 | Fab ([[Nanometer|nm]])
! rowspan=2 | [[Computer bus|Bus]] [[I/O interface|interface]]
! rowspan=2 | Core clock ([[Hertz|MHz]])
! rowspan=2 | Shader clock ([[Hertz|MHz]])
! rowspan=2 | Memory clock ([[Hertz|MHz]])
! rowspan=2 | Core config<sup>12</sup>
! colspan=4 | Memory
! colspan="2" |[[Fillrate]]
! Processing power ([[GFLOPS]])
! colspan=2 | Supported [[Application programming interface|API]] version
! rowspan=2 | [[Thermal design power|TDP]] (Watts)
! rowspan=2 | Notes
|-
! Size ([[Mebibyte|MiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width ([[bit]])
!Pixel ([[Pixel|GP]]/s)
!Texture ([[Texel (graphics)|GT]]/s)
! [[Single precision floating-point format|Single precision]]
! [[Direct3D]]
! [[OpenGL]]
|-
!Quadro NVS 110M
|Jun 1, 2006
|G72M
| rowspan="2" |90
| rowspan="2" |PCIe 1.0 x16
|300
|300
|600
| rowspan="2" |3:4:4:2
| rowspan="2" |Up to 512
|4.8
|DDR
| rowspan="8" |64
|0.6
|1.2
| rowspan="2" |
|9.0c
|2.1
| rowspan="6" |10
| rowspan="9" |
|-
!Quadro NVS 120M
|Jun 1, 2006
|G72GLM
|450
|450
| rowspan="2" |700
|5.6
| rowspan="2" |DDR2
|0.9
|1.8
|9.0c
|2.1
|-
!Quadro NVS 130M
|May 9, 2007
| rowspan="3" |G86M
| rowspan="3" |80
| rowspan="5" |PCIe 2.0 x16
|400?
|800?
|8:4:4
| rowspan="2" |Up to 256
|6.4?
|1.6?
|1.6?
|19.2
| rowspan="5" |10.0
| rowspan="5" |3.3
|-
!Quadro NVS 135M
|May 9, 2007
| rowspan="2" |400
| rowspan="2" |800
|1188
|16:8:4
|9.504
| rowspan="4" |GDDR3
| rowspan="2" |1.6
| rowspan="2" |3.2
|38.4
|-
!Quadro NVS 140M
|May 9, 2007
|1200
|16:8:4
|Up to 512
|9.6
|38.4
|-
!Quadro NVS 150M
|Aug 15, 2008
| rowspan="2" |G98M
| rowspan="2" |65
|530
|1300
| rowspan="2" |1400
|8:4:4
|Up to 256
| rowspan="2" |11.2
| rowspan="2" |2.12
|2.12
|31.2
|-
!Quadro NVS 160M
|Aug 15, 2008
|580
|1450
|8:8:4
|256
|4.24
|34.8
|12
|-
!Quadro NVS 300M
|
|G72GLM
|90
|PCIe 1.0 x16
|450
|450
|1000
|3:4:4:2
| rowspan="2" |Up to 512
|8
|DDR2
|0.9
|1.8
|
|9.0c
|2.1
|16
|-
!Quadro NVS 320M
|Jun 9, 2007
|G84M
|65
|PCIe 2.0 x16
|575
|1150
|1400
|32:16:8
|22.4
| rowspan="2" |GDDR3
|128
|4.6
|9.2
|110.4
|10.0
|3.3
|20
|-
!Quadro NVS 510M
|Aug 21, 2006
|G72GLM
|90
|PCIe 1.0 x16
|500
|500
|1200
|8:24:24:16
|Up to 1024
|38.4
|256
|8
|12
|
|9.0c
|2.1
|45?
|based on Go 7900 GTX
|}
===Mobility NVS series===
{{Further|Quadro}}
* <sup>1</sup>[[Unified shader model|Unified shaders]]: [[texture mapping unit]]s: [[render output unit]]s
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
|-
! rowspan=2 | Model
! rowspan="2" |Launch
! rowspan=2 | [[Code name]]
! rowspan=2 | Fab ([[Nanometer|nm]])
! rowspan=2 | [[Computer bus|Bus]] [[I/O interface|interface]]
! rowspan=2 | Core clock ([[Hertz|MHz]])
! rowspan=2 | Shader clock ([[Hertz|MHz]])
! rowspan=2 | Memory clock ([[Hertz|MHz]])
! rowspan="2" | Core config<sup>1</sup>
! colspan=4 | Memory
! colspan="2" |[[Fillrate]]
! Processing power ([[GFLOPS]])
! colspan="4" | Supported [[Application programming interface|API]] version
! rowspan=2 | [[Thermal design power|TDP]] (Watts)
! rowspan=2 | Notes
|-
! Size ([[Mebibyte|MiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width ([[bit]])
!Pixel ([[Pixel|GP]]/s)
!Texture ([[Texel (graphics)|GT]]/s)
! [[Single precision floating-point format|Single precision]]
! [[Direct3D]]
! [[OpenGL]]
![[OpenCL]]
![[CUDA]]
|-
!NVS 2100M
|Jan 7, 2010
| rowspan="2" |GT218M
| rowspan="4" |40
| rowspan="6" |PCIe 2.0 x16
|535
|1230
| rowspan="4" |1600
| rowspan="2" |16:8:4
| rowspan="2" |Up to 512
| rowspan="3" |12.8
| rowspan="6" |GDDR3
| rowspan="3" |64
|2.14
|4.28
|59.04
| rowspan="2" |10.1
| rowspan="2" |3.3
| rowspan="6" |1.1
| rowspan="2" |1.2
| rowspan="2" |14
|
|-
!NVS 3100M
|Jan 7, 2010
|600
|1470
|2.4
|4.8
|70.56
|based on G210M/310M
|-
!NVS 4200M
|Jan 7, 2010
|GF119
|810
|1620
|48:8:4
| rowspan="4" |Up to 1024
|3.24
|6.48
|155.52
|11
|4.5
|2.1
|25
|based on GT 520M
|-
!NVS 5100M
|Feb 22, 2011
|GT216M
|550
|1210
|48:16:8
|25.6
|128
|4.4
|8.8
|174.24
|10.1
|3.3
|1.2
| rowspan="3" |35
| rowspan="3" |
|-
!NVS 5200M
|Jun 1, 2012
| rowspan="2" |GF108
GF117<ref>{{cite web|title=NVIDIA GF117 GPU Specs|url=https://www.techpowerup.com/gpu-specs/nvidia-gf117.g110|access-date=2022-01-03|website=TechPowerUp|language=en}}</ref>
| rowspan="2" |40/28
|625
|1250
| rowspan="2" |1800
| rowspan="2" |96:16:4
|14.4
|64
|2.5
|10
|240
| rowspan="2" |11
| rowspan="2" |4.5
| rowspan="2" |2.1
|-
!NVS 5400M
|Jun 1, 2012
|660
|1320
|28.8
|128
|2.64
|10.56
|253.44
|}
===Quadro===
{{Further|Quadro}}
* <sup>1</sup> [[Vertex shader]]s: [[pixel shader]]s: [[texture mapping unit]]s: [[render output unit]]s
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
|-
! rowspan=2 | Model
! rowspan=2 | [[Code name]]
! rowspan=2 | Fab ([[Nanometer|nm]])
! rowspan=2 | [[Computer bus|Bus]] [[I/O interface|interface]]
! rowspan=2 | Core clock ([[Hertz|MHz]])
! rowspan=2 | Memory clock ([[Hertz|MHz]])
! rowspan="2" | Core config<sup>1</sup>
! colspan=4 | Memory
! colspan="2" |[[Fillrate]]
! colspan="2" | Supported [[Application programming interface|API]] version
! rowspan=2 | [[Thermal design power|TDP]] (Watts)
! rowspan=2 | Notes
|-
! Size ([[Mebibyte|MiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width ([[bit]])
!Pixel ([[Pixel|GP]]/s)
!Texture ([[Texel (graphics)|GT]]/s)
! [[Direct3D]]
! [[OpenGL]]
|-
!Quadro
| rowspan="2" |NV10GL
| rowspan="2" |220
| rowspan="6" |AGP 4x
|135
|166
| rowspan="2" |0:4:4:4
| rowspan="5" |64
|2.66
|SDR
| rowspan="14" |128
| rowspan="2" |0.54
| rowspan="2" |0.54
| rowspan="5" |7
| rowspan="5" |1.2
| rowspan="14" |
| rowspan="11" |
|-
!Quadro DDR
|135
|333
|5.312
|DDR
|-
!Quadro2 MXR
| rowspan="2" |NV11GL
| rowspan="2" |180
|200
|183
| rowspan="2" |0:2:4:2
|2.93
| rowspan="2" |SDR
|0.4
|0.4
|-
!Quadro2 EX
|175
|166
|2.7
|0.35
|0.35
|-
!Quadro2 PRO
|NV15GL
|150
|250
|400
|0:4:8:4
|6.4
| rowspan="3" |DDR
|1
|2
|-
!Quadro DCC
|NV20GL
|180
|200
|460
|1:4:8:4
| rowspan="3" |128
|7.4
|0.8
|1.6
|8.0
|1.3
|-
!Quadro4 380XGL
|NV18GL
| rowspan="8" |150
|AGP 8x
|275
|513
| rowspan="4" |0:2:4:2
|8.2
|0.55
|1.1
| rowspan="4" |7
| rowspan="4" |1.2
|-
!Quadro4 500XGL
| rowspan="2" |NV17GL
| rowspan="2" |AGP 4x
|250
|166
|2.7
|SDR
|0.5
|1
|-
!Quadro4 550XGL
|270
| rowspan="2" |400
| rowspan="3" |64
| rowspan="2" |6.4
| rowspan="6" |DDR
|0.59
|1.08
|-
!Quadro4 580XGL
|NV18GL
|AGP 8x
|300
|0.6
|1.2
|-
!Quadro4 700XGL
| rowspan="3" |NV25
| rowspan="3" |AGP 4x
| rowspan="2" |275
| rowspan="2" |550
| rowspan="4" |2:4:8:4
| rowspan="2" |8.8
| rowspan="2" |1.1
| rowspan="2" |2.2
| rowspan="4" |8.1
| rowspan="4" |1.3
|-
!Quadro4 750XGL
| rowspan="3" |128
| rowspan="3" |Stereo display
|-
!Quadro4 900XGL
| rowspan="2" |300
| rowspan="2" |650
| rowspan="2" |10.4
| rowspan="2" |1.2
| rowspan="2" |2.4
|-
!Quadro4 980XGL
|NV28GL
|AGP 8x
|-
! rowspan=2 | Model
! rowspan=2 | [[Code name]]
! rowspan=2 | Fab ([[Nanometer|nm]])
! rowspan=2 | [[Computer bus|Bus]] [[I/O interface|interface]]
! rowspan=2 | Core clock ([[Hertz|MHz]])
! rowspan=2 | Memory clock ([[Hertz|MHz]])
! rowspan="2" | Core config<sup>1</sup>
! Size ([[Mebibyte|MiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width ([[bit]])
!Pixel ([[Pixel|GP]]/s)
!Texture ([[Texel (graphics)|GT]]/s)
! [[Direct3D]]
! [[OpenGL]]
! rowspan=2 | TDP (Watts)
! rowspan=2 | Features
|-
! colspan=4 | Memory
! colspan="2" |[[Fillrate]]
! colspan="2" | Supported API version
|}
===Quadro Go (GL) & Quadro FX Go series===
{{Further|Quadro}}
These are early mobile Quadro chips, based on the GeForce2 Go through the GeForce Go 6800. Precise specifications for these old mobile workstation chips are hard to find and often conflict between Nvidia press releases and product listings in GPU databases such as TechPowerUp's GPU database.
* <sup>1</sup> [[Vertex shader]]s: [[pixel shader]]s: [[texture mapping unit]]s: [[render output unit]]s
* <sup>2</sup> [[Unified shader model|Unified shaders]]: [[texture mapping unit]]s: [[render output unit]]s
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
|-
! rowspan=2 | Model
! rowspan=2 | Launch
! rowspan=2 | [[Code name]]
! rowspan=2 | Fab ([[Nanometer|nm]])
! rowspan=2 | [[Computer bus|Bus]] [[I/O interface|interface]]
! rowspan=2 | Core clock ([[Hertz|MHz]])
! rowspan=2 | Memory clock ([[Hertz|MHz]])
! rowspan="2" | Core config<sup>12</sup>
! colspan=4 | Memory
! colspan="2" |[[Fillrate]]
! colspan="2" | Supported [[Application programming interface|API]] version
! rowspan=2 | [[Thermal design power|TDP]] (Watts)
! rowspan=2 | Notes
|-
! Size ([[Mebibyte|MiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width ([[bit]])
!Pixel ([[Pixel|GP]]/s)
!Texture ([[Texel (graphics)|GT]]/s)
! [[Direct3D]]
! [[OpenGL]]
|-
!Quadro2 Go<ref>{{cite web |url=http://www.nvidia.com/page/quadro2go.html |title=Nvidia Quadro2 Go |website=Nvidia.com |date=2001-08-14 |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20151208114101/http://www.nvidia.com/page/quadro2go.html |archive-date=2015-12-08 |url-status=live }}</ref>
|August 14, 2001<ref>{{cite web|title=NVIDIA Quadro2 Go Specs|url=https://www.techpowerup.com/gpu-specs/quadro2-go.c3385|access-date=2021-04-29|website=TechPowerUp|language=en}}</ref><ref>{{cite web|date=2005-02-11|title=Press Release|url=http://www.nvidia.com/object/IO_20010813_8679.html|archive-url=https://web.archive.org/web/20050211062814/http://www.nvidia.com/object/IO_20010813_8679.html|url-status=dead|archive-date=2005-02-11|access-date=2021-04-29}}</ref>
|NV11 GLM
|180
|AGP 4x
|143
|130<br />360
|2:0:4:2
|32<br />64
|2.9<br />5.8
|SDR<br />DDR
| rowspan="2" |128
|0.286
|0.592
| rowspan="2" |7.0
| rowspan="2" |1.2
| rowspan="6" |
|First mobile Quadro, based on the GeForce2 Go. Dynamic core clock 100–143 MHz, dynamic voltage 1.575 V; the graphics core is listed as 64-bit and 128-bit for the DDR and SDR SDRAM types respectively, according to Nvidia<ref>{{cite web |url=http://www.nvidia.com/object/LO_20010612_4394.html |title=Nvidia Mobile GPU Solutions |website=Nvidia.com |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20080907072309/http://www.nvidia.com/object/LO_20010612_4394.html |archive-date=2008-09-07 |url-status=live }}</ref><ref name="nvidia.com">{{cite web |url=http://www.nvidia.com/object/LO_20020517_4391.html |title=The Standard for Mobile Workstations |website=Nvidia.com |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20080704212000/http://www.nvidia.com/object/LO_20020517_4391.html |archive-date=2008-07-04 |url-status=live }}</ref>
|-
!Quadro4 500 Go GL<ref name="nvidia3">{{cite web |url=http://www.nvidia.com/page/quadro4gogl.html |title=Nvidia Quadro4 Go GL |website=Nvidia.com |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20151208163504/http://www.nvidia.com/page/quadro4gogl.html |archive-date=2015-12-08 |url-status=live }}</ref><ref>{{cite web |url=http://www.nvidia.com/object/LO_20020425_3977.html |title=The Mobile Workstation GPU |website=Nvidia.com |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20081015235440/http://www.nvidia.com/object/LO_20020425_3977.html |archive-date=2008-10-15 |url-status=live }}</ref>
|April 23, 2002<ref>{{cite web|title=NVIDIA Announces Quadro4 500 Go GL for Mobile Workstation|url=https://forum.beyond3d.com/threads/nvidia-announces-quadro4-500-go-gl-for-mobile-workstation.489/|access-date=2021-04-29|website=Beyond3D Forum|language=en-US|archive-date=2021-04-29|archive-url=https://web.archive.org/web/20210429175433/https://forum.beyond3d.com/threads/nvidia-announces-quadro4-500-go-gl-for-mobile-workstation.489/|url-status=dead}}</ref>
|NV17 GLM
|150
| rowspan="2" |AGP 4x,8x
|220
|220
|2:0:4:2
| rowspan="2" |64
|7.0
| rowspan="5" |DDR
|0.44
|0.88
|Based on the GeForce4 Go, dynamic core clock 66–220 MHz, core voltage 1.35 V, uses DDR SDRAM according to Nvidia brochures<ref name="nvidia.com"/>
|-
!Quadro4 700 Go GL<ref name="nvidia3"/><ref>{{cite web |url=http://www.nvidia.com/object/LO_20030203_7731.html |title=Nvidia Mobile GPU Solutions |website=Nvidia.com |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20081007200227/http://www.nvidia.com/object/LO_20030203_7731.html |archive-date=2008-10-07 |url-status=live }}</ref><ref>{{cite web |url=http://www.techpowerup.com/gpudb/1828/quadro4-700-go-gl.html |title=Nvidia Quadro4 700 Go GL {{pipe}} techPowerUp GPU Database |website=Techpowerup.com |access-date=2015-12-11 }}{{dead link|date=June 2022|bot=medic}}{{cbignore|bot=medic}}</ref>
|February 5, 2003<ref>{{cite web|title=NVIDIA Unleashes Quadro4 700 Go GL. Welcome, NV28M|url=https://www.neowin.net/forum/topic/63227-nvidia-unleashes-quadro4-700-go-gl-welcome-nv28m/|access-date=2021-04-29|website=Neowin|date=5 February 2003 |language=en-GB}}</ref><ref>{{cite web|title=Quadro4 700 Go GL, Open GL sui notebook|url=https://edge9.hwupgrade.it/news/device/quadro4-700-go-gl-open-gl-sui-notebook_9225.html|access-date=2021-04-29|website=Hardware Upgrade|language=it-IT}}</ref>
|NV28 GLM
|150
|176
|200
|4:2:4:4
|7.4?
|128?
|0.704
|0.704
|8.1
|1.3
|Based on GeForce4 Go 4200, uses DDR according to Nvidia brochures
|-
!Quadro FX Go 700<ref>{{cite web |url=http://www.techpowerup.com/gpudb/1829/quadro-fx-go700.html |title=Nvidia Quadro FX Go700 {{pipe}} techPowerUp GPU Database |website=Techpowerup.com |access-date=2015-12-11 }}{{dead link|date=June 2022|bot=medic}}{{cbignore|bot=medic}}</ref>
|
|NV31 GLM
|130
| rowspan="2" |AGP 8x
| rowspan="2" |295
|590
|4:2:4:4
| rowspan="2" |128
|9.44?
| rowspan="2" |128
| rowspan="2" |1.18
| rowspan="2" |1.18
| rowspan="3" |9.0
| rowspan="3" |2.1?
|Slightly underclocked GeForce FX 5600 Go
|-
!Quadro FX Go 1000<ref name="techpowerup.com">{{cite web |url=http://www.techpowerup.com/gpudb/1830/quadro-fx-go1000.html |title=Nvidia Quadro FX Go1000 {{pipe}} techPowerUp GPU Database |website=Techpowerup.com |access-date=2015-12-11 }}{{dead link|date=June 2022|bot=medic}}{{cbignore|bot=medic}}</ref>
|February 2004?
|NV36 GLM
|130?
|570
|4:3:4:4
|9.12?
|Based on TSMC 130 nm process with one extra pixel shader?
|-
!Quadro FX Go 1400<ref name="techpowerup.com"/>
|February 25, 2005<ref>{{cite web|last=Hinum|first=Klaus|title=NVIDIA Quadro FX Go 1000|url=https://www.notebookcheck.net/NVIDIA-Quadro-FX-Go-1000.33764.0.html|access-date=2021-04-29|website=Notebookcheck|language=en}}</ref>
|NV41 GLM
|130
|PCIe
|275
|590
|12:5:12:12?
|256
|18.9?
|256
|2.20
|2.2
|The last chip designated as a Quadro FX Go; it uses PCIe instead of AGP 8x. The core config has been reported as either 8:5:8:8 or 12:5:12:12; the latter is more likely since the chip is derived from the GeForce Go 6800.
|}
===Quadro FX series===
{{Further|Quadro}}
* <sup>1</sup> [[Vertex shader]]s: [[pixel shader]]s: [[texture mapping unit]]s: [[render output unit]]s
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
|-
! rowspan=2 | Model
! rowspan=2 | [[Code name]]
! rowspan=2 | Fab ([[Nanometer|nm]])
! rowspan=2 | [[Computer bus|Bus]] [[I/O interface|interface]]
! rowspan=2 | Core clock ([[Hertz|MHz]])
! rowspan=2 | Memory clock ([[Hertz|MHz]])
! rowspan="2" | Core config<sup>1*</sup>
! colspan=4 | Memory
! colspan="2" |[[Fillrate]]
! colspan="2" | Supported [[Application programming interface|API]] version
! rowspan=2 | [[Thermal design power|TDP]] (Watts)
! rowspan=2 | Notes
|-
! Size ([[Mebibyte|MiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width ([[bit]])
!Pixel ([[Pixel|GP]]/s)
!Texture ([[Texel (graphics)|GT]]/s)
! [[Direct3D]]
! [[OpenGL]]
|-
!Quadro FX 500
|NV34GL
| rowspan="3" |150
|AGP 8x
|270
|480
| rowspan="2" |2:4:4:4
|128
|7.68
| rowspan="3" |DDR
| rowspan="6" |128
|1.08
|1.08
|9.0
| rowspan="8" |2.0
| rowspan="8" |
| rowspan="7" |Stereo display
|-
!Quadro FX 600
|NV34GL
|PCI
|350
|480
|128
|7.68
|1.4
|1.4
|
|-
!Quadro FX 700
|NV35GL
| rowspan="8" |AGP 8x
|275
|550
|3:4:8:4
| rowspan="4" |128
|8.8
|1.1
|2.2
|
|-
!Quadro FX 1000
|NV30GL
| rowspan="7" |130
|300
|600
|
|9.6
|GDDR2
|1.2
|2.4
|
|-
!Quadro FX 1100
|NV36GL
|425
|650
|3:4:4:4
|10.4
|DDR2
|1.7
|1.7
|
|-
!Quadro FX 2000
|NV30GL
| rowspan="3" |400
|800
|2:4:8:4
|12.8
|GDDR2
| rowspan="3" |1.6
| rowspan="3" |3.2
|
|-
!Quadro FX 3000
| rowspan="2" |NV35GL
| rowspan="2" |850
| rowspan="2" |3:4:8:4
| rowspan="4" |256
|27.2
| rowspan="2" |DDR
| rowspan="4" |256
|
|-
!Quadro FX 3000G
|27.2
|
|Stereo display, [[Genlock]]
|-
!Quadro FX 4000
| rowspan="2" |NV40GL
| rowspan="2" |375
| rowspan="2" |1000
| rowspan="2" |6:12:12:12
| rowspan="2" |32.0
| rowspan="2" |GDDR3
| rowspan="2" |4.5
| rowspan="2" |4.5
|9.0c
| rowspan="2" |2.1
|142
|Stereo display
|-
!Quadro FX 4000 SDI
|
|
|Stereo display, Genlock
|}
===Quadro FX (x300) series===
{{Further|Quadro}}
* <sup>1</sup> [[Vertex shader]]s: [[pixel shader]]s: [[texture mapping unit]]s: [[render output unit]]s
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
|-
! rowspan=2 | Model
! rowspan=2 | Fab ([[Nanometer|nm]])
! rowspan=2 | [[Computer bus|Bus]] [[I/O interface|interface]]
! rowspan=2 | Core clock ([[Hertz|MHz]])
! rowspan=2 | Memory clock ([[Hertz|MHz]])
! rowspan="2" | Core config<sup>1</sup>
! colspan=2 | [[Fillrate]]
! colspan=4 | Memory
! colspan=2 | Supported [[Application programming interface|API]] version
! rowspan=2 | [[Thermal design power|TDP]] (Watts)
|-
! Pixel ([[Pixel|GP]]/s)
! Texture ([[Texel (graphics)|GT]]/s)
! Size ([[Mebibyte|MiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width ([[bit]])
! [[Direct3D]]
! [[OpenGL]]
|-
!Quadro FX 330
|NV37GL
|150
| rowspan="2" |PCIe x16
|250
|400
|2:4:4:4
|1
|1
|64
|3.2
| rowspan="2" |DDR
|128
| rowspan="2" |9.0
|2.0
|21
|-
!Quadro FX 1300
|NV38
|130
|350
|550
|3:4:8:4
|1.4
|2.8
|128
|17.6
|256
|2.1
|55
|}
===Quadro FX (x400) series===
{{Further|Quadro}}
* <sup>1</sup> [[Vertex shader]]s: [[pixel shader]]s: [[texture mapping unit]]s: [[render output unit]]s
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
|-
! rowspan=2 | Model
! rowspan=2 | [[Code name]]
! rowspan=2 | Fab ([[Nanometer|nm]])
! rowspan=2 | [[Computer bus|Bus]] [[I/O interface|interface]]
! rowspan=2 | Core clock ([[Hertz|MHz]])
! rowspan=2 | Memory clock ([[Hertz|MHz]])
! rowspan="2" | Core config<sup>1</sup>
! colspan=4 | Memory
! colspan="2" |[[Fillrate]]
! colspan="2" | Supported [[Application programming interface|API]] version
! rowspan=2 | [[Thermal design power|TDP]] (Watts)
! rowspan=2 | Notes
|-
! Size ([[Mebibyte|MiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width ([[bit]])
!Pixel ([[Pixel|GP]]/s)
!Texture ([[Texel (graphics)|GT]]/s)
! [[Direct3D]]
! [[OpenGL]]
|-
!Quadro FX 540
|NV43GL
|90
| rowspan="5" |PCIe x16
|300
|550
|4:8:8:8
| rowspan="2" |128
|8.8
|GDDR3
|128
|2.4
|2.4
| rowspan="5" |9.0c
| rowspan="5" |2.1
|35
|
|-
!Quadro FX 1400
|NV41
| rowspan="2" |130
| rowspan="2" |350
|600
|5:8:8:8
|19.2
|DDR
| rowspan="4" |256
|2.8
|2.8
|75
| rowspan="4" |Stereo display, [[Scalable Link Interface|SLI]]
|-
!Quadro FX 3400
|NV45GL/ NV40
|900
| rowspan="2" |5:12:12:12
| rowspan="2" |256
|28.8
| rowspan="3" |GDDR3
|4.2
|4.2
|101
|-
!Quadro FX 3450
|NV41
|110
|425
|1000
|32.0
|5.1
|5.1
|83
|-
!Quadro FX 4400
|NV45GL A3/ NV40
|130
|400
|1050
|6:16:16:16
|512
|33.7
|6.4
|6.4
|110
|}
===Quadro FX (x500) series===
{{Further|Quadro}}
* <sup>1</sup> [[Vertex shader]]s: [[pixel shader]]s: [[texture mapping unit]]s: [[render output unit]]s
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
|-
! rowspan=2 | Model
! rowspan=2 | [[Code name]]
! rowspan=2 | Fab ([[Nanometer|nm]])
! rowspan=2 | [[Computer bus|Bus]] [[I/O interface|interface]]
! rowspan=2 | Core clock ([[Hertz|MHz]])
! rowspan=2 | Memory clock ([[Hertz|MHz]])
! rowspan="2" | Core config<sup>1</sup>
! colspan=4 | Memory
! colspan="2" |[[Fillrate]]
! colspan="2" | Supported [[Application programming interface|API]] version
! rowspan=2 | [[Thermal design power|TDP]] (Watts)
! rowspan=2 | Notes
|-
! Size ([[Mebibyte|MiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width ([[bit]])
!Pixel ([[Pixel|GP]]/s)
!Texture ([[Texel (graphics)|GT]]/s)
! [[Direct3D]]
! [[OpenGL]]
|-
!Quadro FX 350
|G72
|90
| rowspan="10" |PCIe x16
|550
| rowspan="2" |400
|3:4:4:2
| rowspan="2" |128
|6.4
|DDR2
|64
|1.1
|2.2
|9.0c
| rowspan="10" |2.1
|21
| rowspan="3" |
|-
!Quadro FX 550
|NV43
|110
|360
|4:8:8:8
|12.8
| rowspan="7" |GDDR3
|128
|2.88
|2.88
|
|30
|-
!Quadro FX 1500
| rowspan="2" |G71
| rowspan="2" |90
|325
|625
|6:16:16:16
| rowspan="2" |256
|40.0
| rowspan="4" |256
|5.2
|5.2
|
|65
|-
!Quadro FX 3500
|450
|660
|7:20:20:16
|42.2
|7.2
|9
|
|80
| rowspan="2" |Stereo display, [[Scalable Link Interface|SLI]]
|-
!Quadro FX 4500
| rowspan="2" |G70
| rowspan="2" |110
| rowspan="2" |430
| rowspan="2" |525
| rowspan="2" |8:24:24:16
| rowspan="2" |512
| rowspan="2" |33.6
| rowspan="2" |6.88
| rowspan="2" |10.3
|
|109
|-
!Quadro FX 4500 SDI
|
|116
|Stereo display, Genlock
|-
!Quadro FX 4500X2
| rowspan="4" |G71
| rowspan="4" |90
|500
|605
|2x 8:24:24:16
|2x 512
|2x 38.7
|2x 256
|2x 8
|2x 12
|
|145
| rowspan="4" |Stereo display, [[Scalable Link Interface|SLI]], Genlock
|-
!Quadro FX 4500
rev. A2
| rowspan="3" |650
|800
| rowspan="3" |8:24:24:16
|512
|51.2
| rowspan="3" |256
| rowspan="3" |10.4
| rowspan="3" |15.6
|
|105
|-
!Quadro FX 5500
| rowspan="2" |505
| rowspan="2" |1024
| rowspan="2" |32.3
| rowspan="2" |DDR2
|
|96
|-
!Quadro FX 5500 SDI
|
|104
|}
Quadro FX (x500M) series, based on the GeForce 7 series.
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
|-
! rowspan=2 | Model
! rowspan="2" |Launch
! rowspan=2 | [[Code name]]
! rowspan=2 | Fab ([[Nanometer|nm]])
! rowspan=2 | [[Computer bus|Bus]] [[I/O interface|interface]]
! rowspan=2 | Core clock ([[Hertz|MHz]])
! rowspan=2 | Memory clock ([[Hertz|MHz]])
! rowspan="2" | Core config<sup>1</sup>
! colspan=4 | Memory
! colspan="2" |[[Fillrate]]
! colspan="2" | Supported [[Application programming interface|API]] version
! rowspan=2 | [[Thermal design power|TDP]] (Watts)
|-
! Size ([[Mebibyte|MiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width ([[bit]])
!Pixel ([[Pixel|GP]]/s)
!Texture ([[Texel (graphics)|GT]]/s)
! [[Direct3D]]
! [[OpenGL]]
|-
!Quadro FX 350M
|Mar 13, 2006
|G72GLM
| rowspan="4" |90
| rowspan="4" |PCIe 1.0 x16
|450
|900
|3:4:4:2
|256
|14.4
| rowspan="4" |GDDR3
|128
|0.9
|1.8
| rowspan="4" |9.0c
| rowspan="4" |2.1
| rowspan="4" |15
|-
!Quadro FX 1500M
|Apr 18, 2006
| rowspan="3" |G71GLM
|375
|1000
| rowspan="3" |8:24:24:16
| rowspan="3" |512
|32
| rowspan="3" |256
|6
|9
|-
!Quadro FX 2500M
|Sep 29, 2005
|500
| rowspan="2" |1200
| rowspan="2" |38.4
|8
|12
|-
!Quadro FX 3500M
|Mar 1, 2007
|575
|9.2
|13.8
|}
===Quadro FX (x600) series===
{{Further|Quadro}}
* <sup>1</sup> [[Vertex shader]]s: [[pixel shader]]s: [[texture mapping unit]]s: [[render output unit]]s
* <sup>2</sup> [[Unified shader model|Unified shaders]]: [[texture mapping unit]]s: [[render output unit]]s
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
|-
! rowspan=2 | Model
! rowspan="2" |Launch
! rowspan=2 | [[Code name]]
! rowspan=2 | Fab ([[Nanometer|nm]])
! rowspan=2 | [[Computer bus|Bus]] [[I/O interface|interface]]
! rowspan=2 | Core clock ([[Hertz|MHz]])
! rowspan=2 | Shader clock ([[Hertz|MHz]])
! rowspan=2 | Memory clock ([[Hertz|MHz]])
! rowspan="2" | Core config<sup>12</sup>
! colspan=4 | Memory
! colspan="2" |[[Fillrate]]
! colspan="2" | Processing power ([[GFLOPS]])<ref name="NVcomp1">{{cite web|title=Quadro-Powered All-In-One Workstations|url=http://www.nvidia.com/content/PDF/product-comparison/Quadro-Product-Comparison.pdf|url-status=live|archive-url=https://web.archive.org/web/20130624012038/http://www.nvidia.com/content/PDF/product-comparison/Quadro-Product-Comparison.pdf|archive-date=2013-06-24|access-date=2015-12-11|publisher=Nvidia}}</ref><ref name="NVcomp2">{{cite web|date=2010-10-25|title=Quadro FX series|url=http://hgpu.org/?cat=92|url-status=live|archive-url=https://web.archive.org/web/20160113203735/http://hgpu.org/?cat=92|archive-date=2016-01-13|access-date=2015-12-11|website=Hgpu.org}}</ref>
! colspan="4" | Supported [[Application programming interface|API]] version
! rowspan=2 | [[Thermal design power|TDP]] (Watts)
! rowspan=2 | Notes
|-
! Size ([[Mebibyte|MiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width ([[bit]])
!Pixel ([[Pixel|GP]]/s)
!Texture ([[Texel (graphics)|GT]]/s)
! [[Single precision floating-point format|Single precision]]
! [[Double precision floating-point format|Double precision]]
! [[Direct3D]]
! [[OpenGL]]
![[OpenCL]]
![[CUDA]]
|-
!Quadro FX 560
|Apr 20, 2006
|G73GL
| rowspan="4" |90
| rowspan="3" |PCIe x16
|350
|350
|1200
|5:12:12:8
|128
|19.2
| rowspan="4" |GDDR3
|128
|2.8
|4.2
|
|
|9.0c
|2.1
| -
| -
|30
|
|-
!Quadro FX 4600<sup>2</sup>
|Mar 5, 2007
| rowspan="2" |G80-850-A2 + NVIO-1-A3
| rowspan="2" |500
| rowspan="2" |1200
| rowspan="2" |1400
|96:24:24
| rowspan="2" |768
| rowspan="2" |67.2
| rowspan="3" |384
| rowspan="2" |12
| rowspan="2" |24
|345
| rowspan="3" | -
| rowspan="3" |10.0
| rowspan="3" |3.3
| rowspan="2" |1.1
| rowspan="2" |1.0
|134
| rowspan="3" |Stereo display, [[Scalable Link Interface|SLI]], Genlock
|-
!Quadro FX 4600 SDI<sup>2</sup>
|Mar 5, 2007
|96:24:24
|345
|154
|-
!Quadro FX 5600<sup>2</sup>
|Mar 5, 2007
|G80-875-A2 + NVIO-1-A3
|PCIe 2.0 x16
|600
|1350
|1600
|128:32:24
|1536
|76.8
|14.4
|38.4
|518.4
| -
| -
|171
|}
Based on the GeForce 8 series (except the FX 560M and FX 3600M); the first Quadro mobile line to support DirectX 10.
* <sup>1</sup>[[Unified shader model|Unified shaders]]: [[texture mapping unit]]s: [[render output unit]]s
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
|-
! rowspan=2 | Model
! rowspan="2" |Launch
! rowspan=2 | [[Code name]]
! rowspan=2 | Fab ([[Nanometer|nm]])
! rowspan=2 | [[Computer bus|Bus]] [[I/O interface|interface]]
! rowspan=2 | Core clock ([[Hertz|MHz]])
! rowspan=2 | Shader clock ([[Hertz|MHz]])
! rowspan=2 | Memory clock ([[Hertz|MHz]])
! rowspan="2" | Core config<sup>1</sup>
! colspan=4 | Memory
! colspan="2" |[[Fillrate]]
! Processing power ([[GFLOPS]])
! colspan=2 | Supported [[Application programming interface|API]] version
! rowspan=2 | [[Thermal design power|TDP]] (Watts)
! rowspan=2 | Notes
|-
! Size ([[Mebibyte|MiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width ([[bit]])
!Pixel ([[Pixel|GP]]/s)
!Texture ([[Texel (graphics)|GT]]/s)
! [[Single precision floating-point format|Single precision]]
! [[Direct3D]]
! [[OpenGL]]
|-
!Quadro FX 360M
|May 9, 2007
|G86M
|80
| rowspan="4" |PCIe 1.0 x16
|400
|800
| rowspan="2" |1200
|16:8:4
|256
|9.6
|DDR2
|64
|1.6
|3.2
|38.4
|10.0
|3.3
|17
|Based on the GeForce 8400M GS
|-
!Quadro FX 560M
|Apr 20, 2006
|G73GLM
|90
|500
|500
|5:12:12:8
| rowspan="3" |512
|19.2
| rowspan="3" |GDDR3
| rowspan="2" |128
|4
|6
|
|9.0c
|2.1
|35?
|7600GS based?
|-
!Quadro FX 1600M
|Jun 1, 2007
|G84M
|80
|625
| rowspan="2" |1250
| rowspan="2" |1600
|32:16:8
|25.6
|5
|10
|120
| rowspan="2" |10.0
| rowspan="2" |3.3
|50?
|
|-
!Quadro FX 3600M
|Feb 23, 2008
|G92M
|65
|500
|64:32:16<br />96:48:16
|51.2
|256
|8<br />8
|16<br />24
|240<br />360
|70
|Based on the GeForce 8800M GTX. The Dell Precision M6300 uses the 64-shader version of the FX 3600M.
|}
===Quadro FX (x700) series===
{{Further|Quadro}}
* <sup>1</sup>[[Unified shader model|Unified shaders]]: [[texture mapping unit]]s: [[render output unit]]s
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
|-
! rowspan=2 | Model
! rowspan="2" |Launch
! rowspan=2 | [[Code name]]
! rowspan=2 | Fab ([[Nanometer|nm]])
! rowspan=2 | [[Computer bus|Bus]] [[I/O interface|interface]]
! rowspan=2 | Core clock ([[Hertz|MHz]])
! rowspan=2 | Shader clock ([[Hertz|MHz]])
! rowspan=2 | Memory clock ([[Hertz|MHz]])
! rowspan="2" | Core config<sup>1</sup>
! colspan=4 | Memory
! colspan="2" |[[Fillrate]]
! colspan="2" | Processing power ([[GFLOPS]])<ref name="NVcomp1" /><ref name="NVcomp2" />
! colspan="4" | Supported [[Application programming interface|API]] version
! rowspan=2 | [[Thermal design power|TDP]] (Watts)
! rowspan=2 | Notes
|-
! Size ([[Mebibyte|MiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width ([[bit]])
!Pixel ([[Pixel|GP]]/s)
!Texture ([[Texel (graphics)|GT]]/s)
! [[Single precision floating-point format|Single precision]]
! [[Double precision floating-point format|Double precision]]
! [[Direct3D]]
! [[OpenGL]]
![[OpenCL]]
![[CUDA]]
|-
!Quadro FX 370
|Sep 12, 2007
|G84-825-A2
| rowspan="2" |80
| rowspan="2" |PCIe x16
|360
|720
|800
|16:8:4
| rowspan="2" |256
|6.4
| rowspan="5" |DDR2
| rowspan="2" |64
|1.44
|2.88
|34.56
| rowspan="8" | -
| rowspan="8" |10.0
| rowspan="8" |3.3
| rowspan="2" |1.1
| rowspan="2" |1.1
|35
|
|-
!Quadro FX 370 LP
|Nov 6, 2008
|G98
|540
|1300
|1000
|8:8:4
|8
|2.16
|4.32
|25.92
|25
|DMS-59 for two Single Link DVI
|-
!Quadro FX 470
|Sep 12, 2007
|MCP7A-U
|65
|PCIe 2.0 x16<br />(Integrated)
|580
|1400
|800<br />(system memory)
|16:8:4
|Up to 256 MiB from system memory.
| rowspan="3" |12.8
| rowspan="3" |128
|2.32
|4.64
|67.2
| -
| -
|30
|based on GeForce 9400 mGPU
|-
!Quadro FX 570
|Sep 12, 2007
|G84-850-A2
| rowspan="2" |80
| rowspan="2" |PCIe x16
| rowspan="2" |460
| rowspan="2" |920
| rowspan="2" |800
|16:8:8
|256
| rowspan="2" |3.68
|3.68
|44.1
| rowspan="5" |1.1
| rowspan="5" |1.1
|38
| rowspan="2" |
|-
!Quadro FX 1700
|Sep 12, 2007
|G84-875-A2
|32:16:8
| rowspan="2" |512
|7.36
|88.32
|42
|-
!Quadro FX 3700
|Jan 8, 2008
|G92-875-A2
| rowspan="3" |65
| rowspan="3" |PCIe 2.0 x16
|500
|1250
| rowspan="3" |1600
|112:56:16
|51.2
| rowspan="3" |GDDR3
|256
|8
|28
|420
|78
|Stereo display, [[Scalable Link Interface|SLI]]
|-
!Quadro FX 4700X2
|Apr 18, 2008
|2x G92-880-A2
|600
|1500
|2x 128:64:16
|2x 1024
|2x 51.2
|2x 256
|2x 9.6
|2x 38.4
|2x 576
|226
|[[Scalable Link Interface|SLI]]
|-
!Quadro VX 200
|Jan 8, 2008
|G92-851-A2
|450
|1125
|96:48:16
|512
|51.2
|256
|7.2
|21.6
|324
|75
|2x Dual-link DVI, S-Video, optimised for [[AutoCAD|Autodesk AutoCAD]]
|}
Quadro FX (x700M) series.
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
|-
! rowspan=2 | Model
! rowspan="2" |Launch
! rowspan=2 | [[Code name]]
! rowspan=2 | Fab ([[Nanometer|nm]])
! rowspan=2 | [[Computer bus|Bus]] [[I/O interface|interface]]
! rowspan=2 | Core clock ([[Hertz|MHz]])
! rowspan=2 | Shader clock ([[Hertz|MHz]])
! rowspan=2 | Memory clock ([[Hertz|MHz]])
! rowspan="2" | Core config<sup>1</sup>
! colspan=4 | Memory
! colspan="2" |[[Fillrate]]
! Processing power ([[GFLOPS]])
! colspan=2 | Supported [[Application programming interface|API]] version
! rowspan=2 | [[Thermal design power|TDP]] (Watts)
|-
! Size ([[Mebibyte|MiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width ([[bit]])
!Pixel ([[Pixel|GP]]/s)
!Texture ([[Texel (graphics)|GT]]/s)
! [[Single precision floating-point format|Single precision]]
! [[Direct3D]]
! [[OpenGL]]
|-
!Quadro FX 370M
|Aug 15, 2008
|G98M
|65
| rowspan="6" |PCIe 1.0 x16
|550
|1400
|1200
|8:4:4
|256
|9.6
| rowspan="6" |GDDR3
|64
|2.2
|2.2
|33.6
| rowspan="6" |10.0
| rowspan="6" |3.3
|20
|-
!Quadro FX 570M
|Jun 1, 2007
|G84M
|80
|475
|950
|1400
| rowspan="3" |32:16:8
| rowspan="4" |512
|22.4
| rowspan="3" |128
|3.8
|7.6
|91.2
|45
|-
!Quadro FX 770M
|Aug 14, 2008
| rowspan="2" |G96M
| rowspan="4" |65
|500
|1250
| rowspan="4" |1600
| rowspan="2" |25.6
|4
|8
|119.0
|35
|-
!Quadro FX 1700M
|Oct 1, 2008
|625
|1550
|5
|10
|148.8
|50
|-
!Quadro FX 2700M
|Aug 14, 2008
|G94M
| rowspan="2" |530
|1325
|48:24:16
| rowspan="2" |51.2
| rowspan="2" |256
|8.48
|12.72
|190.8
|65
|-
!Quadro FX 3700M
|Aug 14, 2008
|G92M
|1375
|128:64:16
|1024
|8.8
|35.2
|528
|75
|}
===Quadro FX (x800) series===
{{Further|Quadro}}
* <sup>1</sup>[[Unified shader model|Unified shaders]]: [[texture mapping unit]]s: [[render output unit]]s
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
|-
! rowspan=2 | Model
! rowspan="2" |Launch
! rowspan=2 | [[Code name]]
! rowspan=2 | Fab ([[Nanometer|nm]])
! rowspan=2 | [[Computer bus|Bus]] [[I/O interface|interface]]
! rowspan=2 | Core clock ([[Hertz|MHz]])
! rowspan=2 | Shader clock ([[Hertz|MHz]])
! rowspan=2 | Memory clock ([[Hertz|MHz]])
! rowspan="2" | Core config<sup>1</sup>
! colspan=4 | Memory
! colspan="2" |[[Fillrate]]
! colspan="2" | Processing power ([[GFLOPS]])<ref name="NVcomp1" /><ref name="NVcomp2" />
! colspan="4" | Supported [[Application programming interface|API]] version
! rowspan=2 | [[Thermal design power|TDP]] (Watts)
! rowspan=2 | Notes
|-
! Size ([[Mebibyte|MiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width ([[bit]])
!Pixel ([[Pixel|GP]]/s)
!Texture ([[Texel (graphics)|GT]]/s)
! [[Single precision floating-point format|Single precision]]
! [[Double precision floating-point format|Double precision]]
! [[Direct3D]]
! [[OpenGL]]
![[OpenCL]]
![[CUDA]]
|-
!Quadro FX 380
|Mar 30, 2009
|G96-850-C1
|65
| rowspan="8" |PCIe 2.0 x16
|450
|1100
|1400
|16:8:8
|256
|22.4
| rowspan="8" |GDDR3
|128
|3.6
|3.6
|52.8
| rowspan="4" | -
| rowspan="8" |10.0
| rowspan="8" |3.3
| rowspan="8" |1.1
|1.1
|34
|Two Dual Link DVI, no DisplayPort
|-
!Quadro FX 380 LP
|Dec 1, 2009
|GT218GL
|40
|589
|1402
| rowspan="7" |1600
|16:8:4
| rowspan="2" |512
|12.8
|64
|2.356
|4.712
|67.296
|1.2
|28
|DisplayPort, Dual Link DVI
|-
!Quadro FX 580
|Apr 9, 2009
|G96-875-C1
| rowspan="2" |65
|450
|1125
|32:16:8
|25.6
|128
|3.6
|7.2
|108
| rowspan="2" |1.1
|40
|Dual DisplayPort, Dual Link DVI
|-
!Quadro FX 1800
|Mar 30, 2009
|G94-876-B1
|550
|1375
|64:32:12
|768
|38.4
|192
|6.6
|17.6
|264
|59
| rowspan="4" |Stereo DP Dual Link DVI, Dual DisplayPort, [[Scalable Link Interface|SLI]]
|-
!Quadro FX 3800
|Mar 30, 2009
|G200-835-B3 + NVIO2-A2
| rowspan="4" |55
|600
| rowspan="2" |1204
|192:64:16
|1024
|51.2
|256
|9.632
|38.528
|691.2
|86.4
| rowspan="4" |1.3
|108
|-
!Quadro FX 4800
|Nov 11, 2008
|G200-850-B3 + NVIO2-A2
|602
|192:64:24
|1536
|76.8
|384
|14.448
|38.528
|693.504
|86.688
|150
|-
!Quadro FX 5800
|Nov 11, 2008
|G200-875-B2 + NVIO2-A2
|610
|1296
|240:80:32
|4096
|102.4
|512
|20.736
|51.840
|878.4
|109.8
|189
|-
!Quadro CX<ref>{{cite web |url=http://www.nvidia.com/object/product_quadro_cx_us.html |title=Nvidia Quadro CX is the accelerator for Adobe Creative Suite 4 |website=Nvidia.com |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20151222092543/http://www.nvidia.com/object/product_quadro_cx_us.html |archive-date=2015-12-22 |url-status=live }}</ref>
|Nov 11, 2008
|GT200GL + NVIO2
|602
|1204
|192:64:24
|1536
|76.8
|384
|14.448
|38.528
|693.504
|86.688
|150
|DisplayPort and dual-link DVI output, optimised for [[Cs4|Adobe Creative Suite 4]]
|}
The last DirectX 10-based Quadro mobile cards.
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
|-
! rowspan=2 | Model
! rowspan="2" |Launch
! rowspan=2 | [[Code name]]
! rowspan=2 | Fab ([[Nanometer|nm]])
! rowspan=2 | [[Computer bus|Bus]] [[I/O interface|interface]]
! rowspan=2 | Core clock ([[Hertz|MHz]])
! rowspan=2 | Shader clock ([[Hertz|MHz]])
! rowspan=2 | Memory clock ([[Hertz|MHz]])
! rowspan="2" | Core config<sup>1</sup>
! colspan=4 | Memory
! colspan="2" |[[Fillrate]]
! Processing power ([[GFLOPS]])
! colspan=2 | Supported [[Application programming interface|API]] version
! rowspan=2 | [[Thermal design power|TDP]] (Watts)
|-
! Size ([[Mebibyte|MiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width ([[bit]])
!Pixel ([[Pixel|GP]]/s)
!Texture ([[Texel (graphics)|GT]]/s)
! [[Single precision floating-point format|Single precision]]
! [[Direct3D]]
! [[OpenGL]]
|-
!Quadro FX 380M
|Jan 7, 2010
|GT218M
| rowspan="3" |40
| rowspan="5" |PCIe 2.0 x16
|625
|1530
| rowspan="2" |1600
|16:8:4
|512
|12.8
| rowspan="2" |GDDR3
|64
|2.5
|5
|73.44
| rowspan="3" |10.1
| rowspan="5" |3.3
|25
|-
!Quadro FX 880M
|Jan 7, 2010
|GT216M
|550
|1210
|48:16:8
| rowspan="4" |1024
|25.6
| rowspan="2" |128
|4.4
|8.8
|174.24
|35
|-
!Quadro FX 1800M
|Jun 15, 2009
|GT215M
|450
|1080
|1600<br />2200
|72:24:8
|25.6<br />35.2
|GDDR3<br />GDDR5
|3.6
|10.8
|233.28
|45
|-
!Quadro FX 2800M
|Dec 1, 2009
| rowspan="2" |G92M
| rowspan="2" |55
|500
|1250
| rowspan="2" |2000
|96:48:16
| rowspan="2" |64
| rowspan="2" |GDDR3
| rowspan="2" |256
|8
|16
|360
| rowspan="2" |10.0
|75
|-
!Quadro FX 3800M
|Aug 14, 2008
|675
|1688
|128:64:16
|10.8
|43.2
|648.192
|100
|}
===Quadro x000 series===
{{Further|Quadro}}
* <sup>1</sup> [[Unified shader model|Unified shaders]]: [[texture mapping unit]]s: [[render output unit]]s
* <sup>4</sup> Each SM in the Fermi architecture contains 4 texture filtering units for every texture address unit, giving the full GF100 a total of 64 texture address units and 256 texture filtering units.<ref name="anandtech.com"/>
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
|-
! rowspan=2 | Model
! rowspan="2" |Launch
! rowspan=2 | [[Code name]]
! rowspan=2 | Fab ([[Nanometer|nm]])
! rowspan=2 | [[Computer bus|Bus]] [[I/O interface|interface]]
! rowspan=2 | Core clock ([[Hertz|MHz]])
! rowspan=2 | Shader clock ([[Hertz|MHz]])
! rowspan=2 | Memory clock ([[Hertz|MHz]])
! rowspan="2" | Core config<sup>1</sup>
! colspan=4 | Memory
! colspan="2" |[[Fillrate]]
! colspan="2" | Processing power ([[GFLOPS]])<ref name="NVcomp1" /><ref name="NVcomp2" />
! colspan="4" | Supported [[Application programming interface|API]] version
! rowspan=2 | [[Thermal design power|TDP]] (Watts)
! rowspan=2 | Notes
|-
! Size ([[Gibibyte|GiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width ([[bit]])
!Pixel ([[Pixel|GP]]/s)
!Texture ([[Texel (graphics)|GT]]/s)
! [[Single precision floating-point format|Single precision]]
! [[Double precision floating-point format|Double precision]]
! [[Direct3D]]
! [[OpenGL]]
![[OpenCL]]
![[CUDA]]
|-
!Quadro 400
|Apr 5, 2011
|GT216GL
| rowspan="8" |40
| rowspan="8" |PCIe 2.0 x16
|450
|1125
|1540
|48:16:4
|0.5
|12.3
| rowspan="2" |DDR3
|64
|1.8
|7.2
|108
| -
|10.1
|4.5
| rowspan="8" |1.1
|1.2
|32
| rowspan="2" |DisplayPort, Dual Link DVI
|-
!Quadro 600
|Dec 13, 2010
|GF108GL
|640
|1280
|1600
|96:16<sup>4</sup>:4
| rowspan="2" |1
|25.6
| rowspan="2" |128
|2.56
|10.24
|245.76
|15
| rowspan="7" |11.0
| rowspan="7" |4.6
| rowspan="2" |2.1
|40
|-
!Quadro 2000
|Dec 24, 2010
|GF106GL (GF106-875)
|625
|1250
|2600
|192:32<sup>4</sup>:16
|41.6
| rowspan="6" |GDDR5
|10
|20
|480
|30
|62
|Stereo DP Dual Link DVI, Dual DisplayPort
|-
!Quadro 4000
|Nov 2, 2010
| rowspan="3" |GF100
|475
|950
|2800
|256:32<sup>4</sup>:32
|2
|89.6
|256
|15.2
|15.2
|486.4
|243
| rowspan="5" |2.0
|142
| rowspan="5" |
|-
!Quadro 5000
|Feb 23, 2011
|513
|1026
| rowspan="2" |3000
|352:44<sup>4</sup>:40
|2.5
|120
|320
|20.53
|22.572
|722.304
|359
|152
|-
!Quadro 6000
|Dec 10, 2010
|574
|1148
|448:56<sup>4</sup>:48
|6
|144
| rowspan="2" |384
|27.552
|32.144
|1028.608
|515
| rowspan="2" |204
|-
!Quadro 7000
|May 2, 2012
|GF110
|651
|1301
|3696
|512:64<sup>4</sup>:48
|6
|177
|31.248
|41.7
|1332
|667
|-
!Quadro Plex 7000<ref>{{cite web |url=https://www.techpowerup.com/gpudb/902/quadro-plex-7000 |title=NVIDIA Quadro Plex 7000 Specs |access-date=2018-03-26 }}{{dead link|date=June 2022|bot=medic}}{{cbignore|bot=medic}}</ref>
|July 25, 2011
|2x GF100
|574
|1148
|3000
|2x 512:64<sup>4</sup>:48
|2x 6
|2x 144
|2x 384
|2x 18.37
|2x 36.74
|2x 1176
|2x 588
|600
|}
Mobile version of the Quadro x000 series.
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
|-
! rowspan=2 | Model
! rowspan="2" |Launch
! rowspan=2 | [[Code name]]
! rowspan=2 | Fab ([[Nanometer|nm]])
! rowspan=2 | [[Computer bus|Bus]] [[I/O interface|interface]]
! rowspan=2 | Core clock ([[Hertz|MHz]])
! rowspan=2 | Shader clock ([[Hertz|MHz]])
! rowspan=2 | Memory clock ([[Hertz|MHz]])
! rowspan="2" | Core config<sup>12</sup>
! colspan=4 | Memory
! colspan="2" |[[Fillrate]]
! Processing power ([[GFLOPS]])
! colspan=2 | Supported [[Application programming interface|API]] version
! rowspan=2 | [[Thermal design power|TDP]] (Watts)
! rowspan=2 | Notes
|-
! Size ([[Gibibyte|GiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width ([[bit]])
!Pixel ([[Pixel|GP]]/s)
!Texture ([[Texel (graphics)|GT]]/s)
! [[Single precision floating-point format|Single precision]]
! [[Direct3D]]
! [[OpenGL]]
|-
!Quadro 500M
|Feb 22, 2011
| rowspan="2" |GF108
| rowspan="7" |40
| rowspan="7" |PCIe 2.0 x16
| rowspan="2" |700
| rowspan="2" |1400
| rowspan="3" |1800
| rowspan="2" |96:16:4
|1
| rowspan="3" |28.8
| rowspan="3" |DDR3
| rowspan="3" |128
| rowspan="2" |2.8
| rowspan="2" |11.2
|268.8
| rowspan="7" |11.0
| rowspan="7" |4.5
|35
|
|-
!Quadro 1000M
|Jan 13, 2011
| rowspan="5" |2
|
|45
|Dell Precision M4600
|-
!Quadro 2000M
|Jan 13, 2011
|GF106
|550
|1100
|192:32:16
|8.8
|17.6
|422.4
|55
|Dell Precision M4600
|-
!Quadro 3000M
|Feb 22, 2011
| rowspan="2" |GF104
|450
|900
| rowspan="2" |2500
|240:40:32
| rowspan="2" |80
| rowspan="4" |GDDR5
| rowspan="4" |256
|14.4
|18
|432
|75
|Dell Precision M6600
|-
!Quadro 4000M
|Feb 22, 2011
|475
|950
|336:56:32
|15.2
|26.6
|638.4
| rowspan="3" |100
|Dell Precision M6600
|-
!Quadro 5000M
|Jul 27, 2010
|GF100
|405
|810
|2400
|320:40:32
|76.8
|12.96
|16.2
|518.4
|Dell Precision M6500
|-
!Quadro 5010M
|Feb 22, 2011
|GF110
|450
|900
|2600
|384:48:32
|4
|83.2
|14.4
|21.6
|691.2
|Dell Precision M6600
|}
===Quadro Kxxx series===
{{Further|Quadro|Kepler (microarchitecture)}}
* <sup>1</sup>[[Unified shader model|Unified shaders]]: [[texture mapping unit]]s: [[render output unit]]s
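* As a worked check of the table values (an illustrative calculation assuming the conventional two FLOPs per unified shader per clock, not an Nvidia-published formula), the Quadro K5000 gives <math>1536 \times 2 \times 706\,\text{MHz} \approx 2168.8\,\text{GFLOPS}</math> single precision and <math>128 \times 706\,\text{MHz} \approx 90.4\,\text{GT/s}</math> texture fillrate, matching the figures listed below.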
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
|-
! rowspan=2 | Model
! rowspan="2" |Launch
! rowspan=2 | [[Code name]]
! rowspan=2 | Fab ([[Nanometer|nm]])
! rowspan=2 | [[Computer bus|Bus]] [[I/O interface|interface]]
! rowspan=2 | Core clock ([[Hertz|MHz]])
! rowspan=2 | Shader clock ([[Hertz|MHz]])
! rowspan=2 | Memory clock ([[Hertz|MHz]])
! rowspan="2" | Core config<sup>1</sup>
! colspan=4 | Memory
! colspan="2" |[[Fillrate]]
! colspan="2" | Processing power ([[GFLOPS]])<ref name="NVcomp1" /><ref name="NVcomp2" />
! colspan=4 | Supported [[Application programming interface|API]] version
! rowspan=2 | [[Thermal design power|TDP]] (Watts)
! rowspan=2 | Notes
|-
! Size ([[Gibibyte|GiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width ([[bit]])
!Pixel ([[Pixel|GP]]/s)
!Texture ([[Texel (graphics)|GT]]/s)
! [[Single precision floating-point format|Single precision]]
! [[Double precision floating-point format|Double precision]]
! [[Direct3D]]
! [[OpenGL]]
! [[Vulkan]]
! [[CUDA]]
|-
!Quadro 410
|Aug 7, 2012
| rowspan="4" |GK107
| rowspan="7" |28
| rowspan="6" |PCIe 2.0 x16
|706
|706
|1800
|192:16:8<br />(1 SMX)
|0.5
|14.4
|DDR3<ref>{{cite web |url=http://www.nvidia.com/object/quadro-410-graphics-card.html#pdpContent=2 |title=Quadro 410 Entry Level Professional Graphics Card |publisher=Nvidia |date=2010-12-08 |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20151222112735/http://www.nvidia.com/object/quadro-410-graphics-card.html#pdpContent=2 |archive-date=2015-12-22 |url-status=live }}</ref>
|64
|5.65
|11.3
|271.10
| rowspan="5" |
| rowspan="7" |11.0
| rowspan="7" |4.6
| rowspan="7" |1.2
| rowspan="6" |3.0
|38
|
|-
!Quadro K600
|Mar 1, 2013
|876
|876
|891<br />(1782)
|192:16:16<br />(1 SMX)
|1
|28.5
|DDR3
| rowspan="3" |128
|14.0
|14.0
|336.38
|41
| 6.3" Card
|-
!Quadro K2000
|Mar 1, 2013
| rowspan="2" |954
| rowspan="2" |954
| rowspan="2" |1000<br />(4000)
|384:32:16<br />(2 SMX)
|2
| rowspan="2" |64
| rowspan="5" |GDDR5
| rowspan="2" |15.2
| rowspan="2" |30.5
|732.67
| rowspan="2" |51
| rowspan="2" | 7.97" Card
|-
!Quadro K2000D
|Mar 1, 2013
|384:32:16<br />(2 SMX)
|2
|
|-
!Quadro K4000
|Mar 1, 2013
|GK106
|810.5
|810.5
|1404<br />(5616)
|768:64:24<br />(4 SMX)
|3
|134.8
|192
|19.4
|51.9
|1244.93
|80
| 9.5" Card
|-
!Quadro K5000
|Aug 17, 2012
|GK104
|706
|706
|1350<br />(5400)
|1536:128:32<br />(8 SMX)
|4
|172.8
|256
|22.6
|90.4
|2168.83
|90.4
|122
| rowspan="2" | 10.5" Card
|-
!Quadro K6000
|Jul 23, 2013
|GK110
|PCIe 3.0 x16
|901.5
|901.5
|1502<br />(6008)
|2880:240:48<br />(15 SMX)
|12
|288
|384
|54.1
|216
|5196
|1732
|3.5
|225
|}
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
|-
! rowspan=2 | Model
! rowspan="2" |Launch
! rowspan=2 | [[Code name]]
! rowspan=2 | Fab ([[Nanometer|nm]])
! rowspan=2 | [[Computer bus|Bus]] [[I/O interface|interface]]
! rowspan=2 | Core clock ([[Hertz|MHz]])
! rowspan=2 | Shader clock ([[Hertz|MHz]])
! rowspan=2 | Memory clock ([[Hertz|MHz]])
! rowspan="2" | Core config<sup>1</sup>
! colspan=4 | Memory
! colspan="2" |[[Fillrate]]
! colspan="2" | Processing power ([[GFLOPS]])<ref name="NVcomp1" /><ref name="NVcomp2" />
! colspan=4 | Supported [[Application programming interface|API]] version
! rowspan=2 | [[Thermal design power|TDP]] (Watts)
! rowspan=2 | Notes
|-
! Size ([[Gibibyte|GiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width ([[bit]])
!Pixel ([[Pixel|GP]]/s)
!Texture ([[Texel (graphics)|GT]]/s)
! [[Single precision floating-point format|Single precision]]
! [[Double precision floating-point format|Double precision]]
! [[Direct3D]]
! [[OpenGL]]
! [[Vulkan]]
! [[CUDA]]
|-
!Quadro K420
|Jul 22, 2014
|GK107
| rowspan="6" |28
| rowspan="5" |PCIe 2.0 x16
|780
|780
|1800
|192:16:16<br />(1 SMX)
|1<br />2<ref>{{cite web |url=http://www.pny-europe.com/en/consumer/explore-all-products/nvidia-quadro/580-nvidia-quadro-k420-2gb |title=NVIDIA Quadro K420 2GB |access-date=2018-09-03 |archive-url=https://web.archive.org/web/20180903182948/http://www.pny-europe.com/en/consumer/explore-all-products/nvidia-quadro/580-nvidia-quadro-k420-2gb |archive-date=2018-09-03 |url-status=live }}</ref>
|29
| rowspan="2" |DDR3
| rowspan="4" |128
|12.48
|12.48
|299.52
|12.48
|11.0
| rowspan="6" |4.6
|1.2
|3.0
|41
|
|-
!Quadro K620
|Jul 22, 2014
|GM107-850
|1000
|1000
|900<br />(1800)
|384:24:16<br />(3 SMM)
|2
|28.8
|16.0
|24.0
|768.0
|24.0
| rowspan="3" |12.0
| rowspan="3" |1.3
| rowspan="3" |5.0
| rowspan="2" |45
| 6.3" Card
|-
!Quadro K1200
|Jan 28, 2015
|GM107-860
|954
|
|1253
|512:32:16<br />(4 SMM)
| rowspan="3" |4
|80.2
| rowspan="4" |GDDR5
|15.3
|30.5
|1083
|
| rowspan="2" | 7.97" Card
|-
!Quadro K2200
|Jul 22, 2014
|GM107-875-A2<ref>{{cite web|url=https://www.renderosity.com/nvidia-quadro-k2200-graphics-card-in-review-cms-17458/|title=NVIDIA Quadro K2200 Graphics Card in Review|access-date=2020-08-16|archive-date=2020-08-15|archive-url=https://web.archive.org/web/20200815035254/https://www.renderosity.com/nvidia-quadro-k2200-graphics-card-in-review-cms-17458|url-status=dead}}</ref>
|1046
|1046
|1253<br />(5012)
|640:40:16<br />(5 SMM)
|80.2
|16.7
|41.8
|1338.9
|41.8
|68
|-
!Quadro K4200
|Jul 22, 2014
|GK104
|780
|780
|1350<br />(5400)
|1344:112:32<br />(7 SMX)
|172.8
| rowspan="2" |256
|24.96
|87.36
|2096.64
|87.36
| rowspan="2" |11.0
| rowspan="2" |1.2
|3.0
|105
| 9.5" Card
|-
!Quadro K5200
|Jul 22, 2014
|GK110B
|PCIe 3.0 x16
|650
|650
|1500<br />(6000)
|2304:192:32<br />(12 SMX)
|8
|192
|20.8
|124.8
|2995.2
|124.8
|3.5
|150
| 10.5" Card
|}
Mobile version of the Quadro (Kxxx) series.
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
|-
! rowspan=2 | Model
! rowspan="2" |Launch
! rowspan=2 | [[Code name]]
! rowspan=2 | Fab ([[Nanometer|nm]])
! rowspan=2 | [[Computer bus|Bus]] [[I/O interface|interface]]
! rowspan=2 | Core clock ([[Hertz|MHz]])
! rowspan=2 | Shader clock ([[Hertz|MHz]])
! rowspan=2 | Memory clock ([[Hertz|MHz]])
! rowspan="2" | Core config<sup>1</sup>
! colspan=4 | Memory
! colspan="2" |[[Fillrate]]
! Processing power ([[GFLOPS]])
! colspan=2 | Supported [[Application programming interface|API]] version
! rowspan=2 | [[Nvidia Optimus]]<br />technology
! rowspan=2 | [[Thermal design power|TDP]] (Watts)
! rowspan=2 | Notes
|-
! Size ([[Gibibyte|GiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width ([[bit]])
!Pixel ([[Pixel|GP]]/s)
!Texture ([[Texel (graphics)|GT]]/s)
! [[Single precision floating-point format|Single precision]]
! [[Direct3D]]
! [[OpenGL]]
|-
!Quadro K500M
|Jun 1, 2012
| rowspan="3" |GK107
| rowspan="6" |28
| rowspan="6" |PCIe 3.0 x16
| rowspan="2" |850
| rowspan="2" |850
|1600
|192:16:8
|1
|12.8
| rowspan="3" |DDR3
|64
|6.8
| rowspan="2" |13.6
|326.4
| rowspan="6" |11.0
| rowspan="6" |4.5
| rowspan="6" |Yes
|35
|
|-
!Quadro K1000M
|Jun 1, 2012
| rowspan="2" |1800
|192:16:16
| rowspan="3" |2
| rowspan="2" |28.8
| rowspan="2" |128
|13.6
|326.4
|45
|Dell Precision M4700
|-
!Quadro K2000M
|Jun 1, 2012
|745
|745
|384:32:16
|11.92
|23.84
|572.16
|55
|Dell Precision M4700
|-
!Quadro K3000M
|Jun 1, 2012
| rowspan="3" |GK104
|654
|654
| rowspan="2" |2800
|576:48:32
| rowspan="2" |89.6
| rowspan="3" |GDDR5
| rowspan="3" |256
|20.93
|31.39
|753.41
|75
|Dell Precision M6700
|-
!Quadro K4000M
|Jun 1, 2012
|600
|600
|960:80:32
| rowspan="2" |4
|19.2
|48
|1152
| rowspan="2" |100
|Dell Precision M6700
|-
!Quadro K5000M
|Aug 7, 2012
|706
|706
|3000
|1344:112:32
|96
|22.59
|79.07
|1897.73
|Dell Precision M6700
|}
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
|-
! rowspan=2 | Model
! rowspan="2" |Launch
! rowspan=2 | [[Code name]]
! rowspan=2 | Fab ([[Nanometer|nm]])
! rowspan=2 | [[Computer bus|Bus]] [[I/O interface|interface]]
! rowspan=2 | Core clock ([[Hertz|MHz]])
! rowspan=2 | Shader clock ([[Hertz|MHz]])
! rowspan=2 | Memory clock ([[Hertz|MHz]])
! rowspan="2" | Core config<sup>1</sup>
! colspan=4 | Memory
! colspan="2" |[[Fillrate]]
! Processing power ([[GFLOPS]])
! colspan=2 | Supported [[Application programming interface|API]] version
! rowspan=2 | [[Nvidia Optimus]]<br />technology
! rowspan=2 | [[Thermal design power|TDP]] (Watts)
! rowspan=2 | Notes
|-
! Size ([[Gibibyte|GiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width ([[bit]])
!Pixel ([[Pixel|GP]]/s)
!Texture ([[Texel (graphics)|GT]]/s)
! [[Single precision floating-point format|Single precision]]
! [[Direct3D]]
! [[OpenGL]]
|-
!Quadro K510M
|Jul 23, 2013
| rowspan="2" |GK208
| rowspan="7" |28
|PCIe 3.0 x8
|850
|850
|1200<br />(2400)
|192:16:8<br />(1 SMX)
| rowspan="2" |1
|19.2
| rowspan="7" |GDDR5
| rowspan="2" |64
|6.8
|13.6
|326.4
| rowspan="7" |11.0
| rowspan="7" |4.5
| rowspan="7" |Yes
| rowspan="2" |30
| rowspan="2" |
|-
!Quadro K610M
|Jul 23, 2013
| rowspan="6" |PCIe 3.0 x16
|980
|980
|1300<br />(2600)
|192:16:8<br />(1 SMX)
|20.8
|7.84
|15.68
|376.32
|-
!Quadro K1100M
|Jul 23, 2013
|GK107
|716
|716
|1400<br />(2800)
|384:32:16<br />(2 SMX)
| rowspan="2" |2
|44.8
| rowspan="2" |128
|11.45
|22.91
|549.89
|45
|Dell Precision M3800 and M4800
|-
!Quadro K2100M
|Jul 23, 2013
|GK106
|654
|654
|1500<br />(3000)
|576:48:16<br />(3 SMX)
|48.0
|10.46
|31.39
|753.41
|55
|Dell Precision M4800
|-
!Quadro K3100M
|Jul 23, 2013
| rowspan="3" |GK104
|680
|680
| rowspan="2" |800<br />(3200)
|768:64:32<br />(4 SMX)
| rowspan="2" |4
| rowspan="2" |102.4
| rowspan="3" |256
|21.76
|43.52
|1044.48
|75
|Dell Precision M6800
|-
!Quadro K4100M
|Jul 23, 2013
|706
|706
|1152:96:32<br />(6 SMX)
|22.59
|67.77
|1626.624
| rowspan="2" |100
|Dell Precision M6800
|-
!Quadro K5100M
|Jul 23, 2013
|771
|771
|900<br />(3600)
|1536:128:32<br />(8 SMX)
|8
|115.2
|24.67
|98.68
|2368.51
|Dell Precision M6800
|}
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
|-
! rowspan=2 | Model
! rowspan="2" |Launch
! rowspan=2 | [[Code name]]
! rowspan=2 | Fab ([[Nanometer|nm]])
! rowspan=2 | [[Computer bus|Bus]] [[I/O interface|interface]]
! rowspan=2 | Core clock ([[Hertz|MHz]])
! rowspan="2" |Boost
clock
([[Hertz|MHz]])
! rowspan=2 | Memory clock ([[Hertz|MHz]])
! rowspan="2" | Core config<sup>1</sup>
! colspan=4 | Memory
! colspan="2" |[[Fillrate]]
! colspan="2" | Processing power ([[GFLOPS]])
! colspan="5" | Supported [[Application programming interface|API]] version
! rowspan=2 | [[Nvidia Optimus]]<br />technology
! rowspan=2 | [[Thermal design power|TDP]] (Watts)
|-
! Size ([[Gibibyte|GiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width ([[bit]])
!Pixel ([[Pixel|GP]]/s)
!Texture ([[Texel (graphics)|GT]]/s)
! [[Single precision floating-point format|Single precision]]
! [[Double-precision floating-point format|Double precision]]
! [[Direct3D]]
! [[OpenGL]]
![[Vulkan (API)|Vulkan]]
![[OpenCL]]
![[CUDA]]
|-
!Quadro K2200M
|Jul 19, 2014
|GM107
|28
|PCIe 3.0 x16
|1150
|1150
|1253<br />(5012)
|640:40:16<br />(5 SMM)
|2
|80.2
|GDDR5
|128
|18.4
|46
|1472
|46
|12.1
|4.6
|1.3
|3.0
|5.0
|Yes
|65
|}
===Quadro Mxxx series===
{{Further|Quadro|Maxwell (microarchitecture)}}
* <sup>1</sup>[[Unified shader model|Unified shaders]]: [[texture mapping unit]]s: [[render output unit]]s: streaming multiprocessors
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
|-
! rowspan=2 | Model
! rowspan="2" |Launch
! rowspan=2 | [[Code name]]
! rowspan=2 | Fab
([[Nanometer|nm]])
! rowspan=2 | [[Computer bus|Bus]] [[I/O interface|interface]]
! rowspan=2 | Core clock
([[Hertz|MHz]])
! rowspan=2 | Shader clock
([[Hertz|MHz]])
! rowspan=2 | Memory clock
([[Hertz|MHz]])
! rowspan="2" | Core config<sup>1</sup>
! colspan="2" |[[GPU cache|Cache]]
! colspan="4" | Memory
! colspan="2" |[[Fillrate]]
! colspan="2" | Processing power ([[GFLOPS]])<ref name="NVcomp1" /><ref name="NVcomp2" />
! colspan="5" | Supported [[Application programming interface|API]] version
! rowspan=2 | [[Thermal design power|TDP]] (Watts)
! rowspan="2" |Release price (USD)
! rowspan="2" | Notes
|-
!L1/SM ([[Kibibyte|KiB]])
!L2<br />([[Mebibyte|MiB]])
! Size
([[Gibibyte|GiB]])
! Bandwidth
([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width
([[bit]])
!Pixel
([[Pixel|GP]]/s)
!Texture
([[Texel (graphics)|GT]]/s)
! [[Single precision floating-point format|Single precision]]
! [[Double precision floating-point format|Double precision]]
! [[Direct3D]]
! [[OpenGL]]
![[Vulkan (API)|Vulkan]]
![[OpenCL]]
![[CUDA]]
|-
!Quadro M2000
|Apr 8, 2016
|GM206-875-A1<ref name="coolpc.com.tw">{{cite web|url=http://www.coolpc.com.tw/phpBB2/viewtopic.php?p=584707/|title=【開箱】搭載24GB記憶體!?NVIDIA Quadro之最M6000 專業繪圖卡登場!|access-date=2020-08-16}}</ref>
| rowspan="4" |28
| rowspan="4" |PCIe 3.0 x16
|796
|1163
|1653<br />(6612)
|768:48:32:6
| rowspan="4" |48
|1
|4
|105.8
| rowspan="4" |GDDR5
|128
|37.8
|56.6
|1812.5
|56.6
| rowspan="4" |12.1
| rowspan="4" |4.6
| rowspan="4" |1.3
| rowspan="4" |1.2
| rowspan="4" |5.2
|75
|$438
| rowspan="2" |Four DisplayPort 1.2a
|-
!Quadro M4000
|Jun 29, 2015
|GM204-850-A1<ref name="【開箱】術有專攻!NVIDIA Quadro Maxwell 專業繪圖">{{cite web|url=http://www.coolpc.com.tw/phpBB2/viewtopic.php?p=550307/|title=【開箱】術有專攻!NVIDIA Quadro Maxwell 專業繪圖卡上市!|access-date=2020-08-16}}</ref>
|773
|773
|1503<br />(6012)
|1664:104:64:13
| rowspan="2" |2
| rowspan="2" |8
|192.4
| rowspan="2" |256
|51.2
|83.2
|2662.4
|83.2
|120
|$791
|-
!Quadro M5000
|Jun 29, 2015
|GM204-875-A1<ref name="【開箱】術有專攻!NVIDIA Quadro Maxwell 專業繪圖"/>
|861
|1038
|1653<br />(6612)
|2048:128:64:16
|211.6
|67.2
|134.4
|4300.8
|134.4
|150
|$2857
| rowspan="2" |Four DisplayPort 1.2a, One DVI-I
|-
!Quadro M6000
|Mar 21, 2015
|GM200GL<br />GM200-880-A1<ref name="coolpc.com.tw"/>
|988
|1114
|1653<br />(6612)
|3072:192:96:24
|3
|12<br />24
|317
|384
|106.9
|213.9<br />285.2
|6070
|190
|250<ref>{{cite web |url=http://images.nvidia.com/content/pdf/quadro/data-sheets/NV_DS_Quadro_M6000_FEB15_NV_US_FNL_HR.pdf |title=Real Interactive Expression: Nvidia Quadro M6000 |website=Images.nvidia.com |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20160304060552/http://images.nvidia.com/content/pdf/quadro/data-sheets/NV_DS_Quadro_M6000_FEB15_NV_US_FNL_HR.pdf |archive-date=2016-03-04 |url-status=live }}</ref>
|$4200<br />$4999
|}
Mobile version of the Quadro (Mxxxx) series.
* <sup>1</sup>[[Unified shader model|Unified shaders]]: [[texture mapping unit]]s: [[render output unit]]s
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
|-
! rowspan=2 | Model
! rowspan="2" |Launch
! rowspan=2 | [[Code name]]
! rowspan=2 | Fab ([[Nanometer|nm]])
! rowspan=2 | [[Computer bus|Bus]] [[I/O interface|interface]]
! rowspan=2 | Core clock ([[Hertz|MHz]])
! rowspan="2" |Boost
clock
([[Hertz|MHz]])
! rowspan=2 | Memory clock ([[Hertz|MHz]])
! rowspan="2" | Core config<sup>1</sup>
! colspan=4 | Memory
! colspan="2" |[[Fillrate]]
! colspan="2" | Processing power ([[GFLOPS]])
! colspan="5" | Supported [[Application programming interface|API]] version
! rowspan=2 | [[Nvidia Optimus]]<br />technology
! rowspan=2 | [[Thermal design power|TDP]] (Watts)
|-
! Size ([[Gibibyte|GiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width ([[bit]])
!Pixel ([[Pixel|GP]]/s)
!Texture ([[Texel (graphics)|GT]]/s)
! [[Single precision floating-point format|Single precision]]
! [[Double-precision floating-point format|Double precision]]
! [[Direct3D]]
! [[OpenGL]]
![[Vulkan (API)|Vulkan]]
![[OpenCL]]
![[CUDA]]
|-
!Quadro M500M
|Apr 27, 2016
|GM108
| rowspan="7" |28
| rowspan="7" |PCIe 3.0 x16
|1029
|1124
|900 (1800)
|384:24:8<br />(3 SMM)
| rowspan="3" |2
|14.40
|[[DDR3 SDRAM|DDR3]]
|64
|8.992
|17.98
|863.2
|26.98
| rowspan="7" |12.1
| rowspan="7" |4.6
| rowspan="7" |1.3
| rowspan="7" |3.0
| rowspan="4" |5.0
| rowspan="7" |Yes
|25
|-
!Quadro M600M
|Aug 18, 2015
| rowspan="3" |GM107
|837
|876
| rowspan="6" |1253<br />(5012)
|384:24:16<br />(3 SMM)
| rowspan="3" |80.2
| rowspan="6" |[[GDDR5 SDRAM|GDDR5]]
| rowspan="3" |128
|7.008
|14.02
|672.8
|21.02
|30
|-
!Quadro M1000M
|Aug 18, 2015
|993
|
|512:32:16<br />(4 SMM)
|15.89
|31.78
|1017
|31.78
|40
|-
!Quadro M2000M
|Dec 3, 2015
|1029
|1098
|640:40:16<br />(5 SMM)
| rowspan="3" |4
|17.57
|43.92
|1405
|43.92
|55
|-
!Quadro M3000M
|Aug 18, 2015
| rowspan="3" |GM204
|1050
|
|1024:64:32<br />(8 SMM)
| rowspan="3" |160.4
| rowspan="3" |256
|33.60
|67.20
|2150
|67.20
| rowspan="3" |5.2
|75
|-
!Quadro M4000M
|Aug 18, 2015
|975
|
|1280:80:48<br />(10 SMM)
|62.40
|78.00
|2496
|78.00
| rowspan="2" |100
|-
!Quadro M5000M
|Aug 18, 2015
|962
|
|1536:96:64<br />(12 SMM)
|8
|62.40
|93.60
|2955.3
|93.60
|}
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
|-
! rowspan=2 | Model
! rowspan="2" |Launch
! rowspan=2 | [[Code name]]
! rowspan=2 | Fab ([[Nanometer|nm]])
! rowspan=2 | [[Computer bus|Bus]] [[I/O interface|interface]]
! rowspan=2 | Core clock ([[Hertz|MHz]])
! rowspan=2 | Boost clock ([[Hertz|MHz]])
! rowspan=2 | Memory clock ([[Hertz|MHz]])
! rowspan="2" | Core config<sup>1</sup>
! rowspan="2" |[[GPU cache|L2]]
[[GPU cache|Cache]] ([[Mebibyte|MiB]])
! colspan=4 | Memory
! colspan="2" |[[Fillrate]]
! Processing power ([[GFLOPS]])
! colspan=3 | Supported [[Application programming interface|API]] version
! rowspan=2 | [[Nvidia Optimus]]<br />technology
! rowspan=2 | [[Thermal design power|TDP]] (Watts)
|-
! Size ([[Gibibyte|GiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width ([[bit]])
!Pixel ([[Pixel|GP]]/s)
!Texture ([[Texel (graphics)|GT]]/s)
! [[Single precision floating-point format|Single precision]]
! [[Direct3D]]
! [[OpenGL]]
! [[CUDA]]
|-
!Quadro M520 Mobile
|Jan 11, 2017
|GM108
| rowspan="4" |28
| rowspan="4" |PCIe 3.0 x16
|965
|1176
|5000
|384:24:8<br />(3 SMM)
|1
|1
|40
| rowspan="4" |GDDR5
|64
|9.4
|18.8
|840
| rowspan="4" |12.1
| rowspan="4" |4.5
| rowspan="3" |5.0
| rowspan="4" |Yes
|25
|-
!Quadro M620 Mobile
|Jan 11, 2017
| rowspan="2" |GM107
|756
|1018
| rowspan="2" |5012
|512:32:16<br />(4 SMM)
|2
|2
| rowspan="2" |80.2
| rowspan="3" |128
|16.3
|32.6
|1000
|30
|-
!Quadro M1200 Mobile
|Jan 11, 2017
|991
|1148
|640:40:16<br />(5 SMM)
|2
| rowspan="2" |4
|18.4
|45.9
|1400
|45
|-
!Quadro M2200 Mobile
|Jan 11, 2017
|GM206
|695
|1037
|5508
|1024:64:32<br />(8 SMM)
|1
|88.1
|33.2
|66.3
|2100
|5.2
|55
|}
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
|-
! rowspan=2 | Model
! rowspan="2" |Launch
! rowspan=2 | [[Code name]]
! rowspan=2 | Fab ([[Nanometer|nm]])
! rowspan=2 | [[Computer bus|Bus]] [[I/O interface|interface]]
! rowspan=2 | Core clock ([[Hertz|MHz]])
! rowspan=2 | Boost clock ([[Hertz|MHz]])
! rowspan=2 | Memory clock ([[Hertz|MHz]])
! rowspan="2" | Core config<sup>1</sup>
! rowspan="2" |[[GPU cache|L2]]
[[GPU cache|Cache]] ([[Mebibyte|MiB]])
! colspan=4 | Memory
! colspan="2" |[[Fillrate]]
! Processing power ([[GFLOPS]])
! colspan=3 | Supported [[Application programming interface|API]] version
! rowspan=2 | [[Nvidia Optimus]]<br />technology
! rowspan=2 | [[Thermal design power|TDP]] (Watts)
|-
! Size ([[Gibibyte|GiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width ([[bit]])
!Pixel ([[Pixel|GP]]/s)
!Texture ([[Texel (graphics)|GT]]/s)
! [[Single precision floating-point format|Single precision]]
! [[Direct3D]]
! [[OpenGL]]
! [[CUDA]]
|-
!Quadro M5500 Mobile
|Apr 8, 2016
|GM204
|28
|PCIe 3.0 x16
|861 ||1140 ||6606
|2048:128:64<br />(16 SMM)
|2
|8
|211.4
|GDDR5
|256
|73
|145.9
|4669
|12.1
|4.5
|5.2
|Yes
|150
|}
===Quadro Pxxx series===
{{Further|Quadro|Pascal (microarchitecture)}}
* <sup>1</sup>[[Unified shader model|Unified shaders]]: [[texture mapping unit]]s: [[render output unit]]s: streaming multiprocessors
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
|-
! rowspan=2 | Model
! rowspan="2" |Launch
! rowspan=2 | [[Code name]]<ref name="NVdevsuplist">{{cite web |url=https://developer.nvidia.com/video-encode-decode-gpu-support-matrix |title=Video Encode and Decode GPU Support Matrix |publisher=Nvidia |access-date=2017-05-07 |archive-url=https://web.archive.org/web/20170710130550/https://developer.nvidia.com/video-encode-decode-gpu-support-matrix |archive-date=2017-07-10 |url-status=live }}</ref>
! rowspan=2 | Fab
([[Nanometer|nm]])
! rowspan=2 | [[Computer bus|Bus]] [[I/O interface|interface]]
! rowspan=2 | Core clock
([[Hertz|MHz]])
! rowspan=2 | Boost clock
([[Hertz|MHz]])
! rowspan=2 | Memory clock
([[Hertz|MHz]])
! rowspan="2" | Core config<sup>1</sup>
! colspan="2" |[[GPU cache|Cache]]
! colspan="4" | Memory
! colspan="2" |[[Fillrate]]<ref name="GPUdb">{{cite web |title=GPU Database |url=https://www.techpowerup.com/gpudb/ |access-date=2017-05-07 |publisher=techPowerUp}}{{dead link|date=June 2022|bot=medic}}{{cbignore|bot=medic}}</ref>
! colspan="2" | Processing power ([[GFLOPS]])<ref name="NVcomp1" /><ref name="NVcomp2" />
! colspan="5" | Supported [[Application programming interface|API]] version
! rowspan=2 | [[Thermal design power|TDP]] (Watts)
! rowspan="2" |Release price (USD)
! rowspan="2" | Notes
|-
!L1/SM ([[Kibibyte|KiB]])
!L2<br />([[Mebibyte|MiB]])
! Size
([[Gibibyte|GiB]])
! Bandwidth
([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width
([[bit]])
!Pixel
([[Pixel|GP]]/s)
!Texture
([[Texel (graphics)|GT]]/s)
! [[Single precision floating-point format|Single precision]]
! [[Double precision floating-point format|Double precision]]
! [[Direct3D]]
! [[OpenGL]]
![[Vulkan (API)|Vulkan]]
![[OpenCL]]
![[CUDA]]
|-
!Quadro P400
|Feb 7, 2017
|GP107-825
|rowspan=4 | 14
| rowspan="10" | PCIe 3.0 x16
|1228
|1252
|1003<br />(4012)
|256:16:16:2
| rowspan="9" |48
|0.5
| rowspan="3" |2
|32
| rowspan="5" |GDDR5
|64
|17.1
|17.1
|641
|20.0
| rowspan="10" |12.1
| rowspan="10" |4.6
| rowspan="10" |1.3
| rowspan="10" |1.2
| rowspan="9" |6.1
|30
|$120
|Three Mini-DisplayPort 1.4
|-
!Quadro P600
|Feb 7, 2017
|GP107-850
|1329
|1557
|1003<br />(4012)
|384:24:16:3
| rowspan="3" |1
| rowspan="2" |64
| rowspan="3" |128
|21.7
|32.5
|1195
|37.3
| rowspan="2" |40
|$178
| rowspan="3" |Four Mini-DisplayPort 1.4
|-
!Quadro P620
|Feb 1, 2018
|GP107-855
| rowspan="2" |1266
|1354
|1003<br />(4012)
|512:32:16:4
|23.3
|46.6
|1490
|46.6
|
|-
!Quadro P1000
|Feb 7, 2017
|GP107-860
|1481
|1752<br />(7008)
|640:40:32:5
|4
|82
|43.3
|54.2
|1894
|59.2
|47
|$375
|-
!Quadro P2000
|Feb 6, 2017
|GP106-875
| rowspan="6" | 16
|1076
|1480
|2002<br />(8008)
|1024:64:40:8
| rowspan="2" |1.25
|5
|140
| rowspan="2" |160
|54.8
|87.7
|3010
|93.8
| rowspan="2" |75
|$585
| rowspan="3" |Four DisplayPort 1.4
|-
!Quadro P2200
|Jun 10, 2019
|GP106-880-K1-A1<ref>{{cite web|url=http://www.coolpc.com.tw/phpBB2/viewtopic.php?p=679224/|title=【開箱】專業設計領域,有你有我!麗臺NVIDIA Quadro P2200 繪圖卡新品上市。|access-date=2020-08-16}}</ref>
|1000
|1493
|1251<br />(10008)
|1280:80:40:9
|5
|200
|GDDR5X
|59.7
|119.4
|3822
|121.3
|
|-
!Quadro P4000
|Feb 6, 2017
|GP104-850-A1
|1202
|1480
|1901<br />(7604)
|1792:112:64:14
| rowspan="2" |2
|8
|243
|GDDR5
| rowspan="2" |256
|78.5
|137.4
|5300
|165.6
|105
|$815
|-
!Quadro P5000
|Oct 1, 2016
|GP104-875-A1
|1607
|1733
|1126<br />(9008)
|2560:160:64:20
|16
|288
| rowspan="2" |GDDR5X
|102.8
|257.1
|8873
|277.3
|180
|$2499
| rowspan="2" |Four DisplayPort 1.4, One DVI-D
|-
!Quadro P6000
|Oct 1, 2016
|GP102-875-A1
|1506
|1645
|1126<br />(9008)
|3840:240:96:30
|3
|24
|432
|384
|136.0
|340.0
|10882 (11758)
|~340
|250
|$5999
|-
!Quadro GP100
|Oct 1, 2016
|GP100-876-A1
|1304
|1442
|703 (1406)
|3584:224:128:56
|24
|4
|16
|720
|HBM2
|4096
|184.7
|323
|10336
|5168
|6.0
|235
|
|NVLINK support
|}
Mobile version of the Quadro (Px000) series.
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
|-
! rowspan=2 | Model
! rowspan="2" |Launch
! rowspan=2 | [[Code name]]
! rowspan=2 | Fab ([[Nanometer|nm]])
! rowspan=2 | [[Computer bus|Bus]] [[I/O interface|interface]]
! rowspan=2 | Core clock ([[Hertz|MHz]])
! rowspan=2 | Boost clock ([[Hertz|MHz]])
! rowspan=2 | Memory clock ([[Hertz|MHz]])
! rowspan="2" | Core config<sup>1</sup>
! rowspan="2" |[[GPU cache|L2]]
[[GPU cache|Cache]] ([[Mebibyte|MiB]])
! colspan=4 | Memory
! colspan="2" |[[Fillrate]]
! Processing power ([[GFLOPS]])
! colspan=3 | Supported [[Application programming interface|API]] version
! rowspan=2 | [[Nvidia Optimus]]<br />technology
! rowspan=2 | [[Thermal design power|TDP]] (Watts)
|-
! Size ([[Gibibyte|GiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width ([[bit]])
!Pixel ([[Pixel|GP]]/s)
!Texture ([[Texel (graphics)|GT]]/s)
! [[Single precision floating-point format|Single precision]]
! [[Direct3D]]
! [[OpenGL]]
! [[CUDA]]
|-
!Quadro P500 Mobile
|Jan 5, 2018
|GP108
| rowspan="4" |14
| rowspan="8" |PCIe 3.0 x16
|1455
|1519
|1253
|256:16:16:2
|0.5
|2
|40
| rowspan="8" |GDDR5
|64
|24.3
|24.3
|750
| rowspan="8" |12.1
| rowspan="8" |4.5
| rowspan="8" |6.1
| rowspan="8" |Yes
|18
|-
!Quadro P600 Mobile
|Feb 7, 2017
|GP107
|1430
|1620
|1252
|384:24:16:3
| rowspan="3" |1
|2
|80
| rowspan="3" |128
|25.92
|38.88
|1200
|25
|-
!Quadro P1000 Mobile
|Feb 7, 2017
|GP107(N18P-Q1-A1)
|1493
|1519
|1502
|512:32:16:4
| rowspan="2" |4
| rowspan="2" |96
|24.3
|48.61
|1600
|40
|-
!Quadro P2000 Mobile
|Feb 6, 2017
|GP107(N18P-Q3-A1)
|1557
|1607
|1502
|768:64:32:6
|51.42
|77.14
|2400
|50
|-
!Quadro P3000 Mobile
|Jan 11, 2017
|GP106
| rowspan="4" |16
|1088
|1215
|1752
|1280:80:48:10
|1.5
|6
|168
|192
|58.32
|97.2
|3098
|75
|-
!Quadro P4000 Mobile
|Jan 11, 2017
| rowspan="3" |GP104
|1202
| rowspan="2" |1228
|1500
| rowspan="2" |1792:112:64:14
| rowspan="3" |2
| rowspan="2" |8
| rowspan="3" |192
| rowspan="3" |256
| rowspan="2" |78.59
| rowspan="2" |137.5
|4398
|100
|-
!Quadro P4000 Max-Q
|Jan 11, 2017
|1114
|1502
|
|80
|-
!Quadro P5000 Mobile
|Jan 11, 2017
|1164
|1506
|1500
|2048:128:64:16
|16
|96.38
|192.8
|6197
|100
|}
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
|-
! rowspan=2 | Model
! rowspan=2 |Launch
! rowspan=2 | [[Code name]]
! rowspan=2 | Fab ([[Nanometer|nm]])
! rowspan=2 | [[Computer bus|Bus]] [[I/O interface|interface]]
! rowspan=2 | Core clock ([[Hertz|MHz]])
! rowspan=2 | Boost clock ([[Hertz|MHz]])
! rowspan=2 | Memory clock ([[Hertz|MHz]])
! rowspan="2" | Core config<sup>1</sup>
! rowspan="2" |[[GPU cache|L2]]
[[GPU cache|Cache]] ([[Mebibyte|MiB]])
! colspan=4 | Memory
! colspan=2 | [[Fillrate]]
! Processing power ([[GFLOPS]])
! colspan=3 | Supported [[Application programming interface|API]] version
! rowspan=2 | [[Nvidia Optimus]]<br />technology
! rowspan=2 | [[Thermal design power|TDP]] (Watts)
|-
! Size ([[Gibibyte|GiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width ([[bit]])
! Pixel ([[Pixel|GP]]/s)
! Texture ([[Texel (graphics)|GT]]/s)
! [[Single precision floating-point format|Single precision]]
! [[Direct3D]]
! [[OpenGL]]
! [[CUDA]]
|-
!Quadro P3200 Mobile
|Feb 21, 2018
| rowspan="5" |GP104
| rowspan="5" |16
| rowspan="5" |PCIe 3.0 x16
|1328
|1543
|1752
|1792:112:64:14
|1.5
|6
|168.2
| rowspan="5" |GDDR5
|192
|98.75
|172.8
|5530
| rowspan="5" |12.1
| rowspan="5" |4.5
| rowspan="5" |6.1
| rowspan="5" |Yes
|75
|-
!Quadro P4200 Mobile
|Feb 21, 2018
|1418
|1594
| rowspan="2" |1753
| rowspan="2" |2304:144:64:18
| rowspan="4" |2
| rowspan="2" |8
| rowspan="2" |224.4
| rowspan="4" |256
|102.0
|229.5
|7345
|100
|-
!Quadro P4200 Max-Q
|Feb 21, 2018
|1215
|1480
|94.72
|213.1
|6820
|100
|-
!Quadro P5200 Mobile
|Feb 21, 2018
|1582
|1759
| rowspan="2" |1804
| rowspan="2" |2560:160:64:20
| rowspan="2" |16
| rowspan="2" |230.9
|112.6
|281.4
|9006
|100
|-
!Quadro P5200 Max-Q
|Feb 21, 2018
|1240
|1480
|94.72
|236.8
|7578
|100
|}
===Quadro GVxxx series===
{{Further|Quadro|Volta (microarchitecture)}}
* <sup>1</sup> [[Unified shader model|Unified shaders]]: [[texture mapping unit]]s: [[render output unit]]s: streaming multiprocessors: tensor cores
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
|-
! rowspan="2" | Model
! rowspan="2" |Launch
! rowspan="2" |[[Code name]]<ref name="NVdevsuplist" />
! rowspan="2" | Fab
([[Nanometer|nm]])
! rowspan="2" |[[Computer bus|Bus]] [[I/O interface|interface]]
! rowspan="2" | Core clock
([[Hertz|MHz]])
! rowspan="2" | Boost clock
([[Hertz|MHz]])
! rowspan="2" | Memory clock
([[Hertz|MHz]])
! rowspan="2" | Core config<sup>1</sup>
! colspan="2" |[[GPU cache|Cache]]
! colspan="4" | Memory
! colspan="2" |[[Fillrate]]<ref name="GPUdb" />
! colspan="2" | Processing power (T[[FLOPS]])
! colspan="5" | Supported [[Application programming interface|API]] version
! rowspan="2" |[[Thermal design power|TDP]] (Watts)
! rowspan="2" | Notes
|-
!L1/SM ([[Kibibyte|KiB]])
!L2<br />([[Mebibyte|MiB]])
! Size
([[Gibibyte|GiB]])
! Bandwidth
([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width
([[bit]])
!Pixel
([[Pixel|GP]]/s)
!Texture
([[Texel (graphics)|GT]]/s)
![[Single precision floating-point format|Single precision]]
![[Double precision floating-point format|Double precision]]
![[Direct3D]]
![[OpenGL]]
![[Vulkan (API)|Vulkan]]
![[OpenCL]]
![[CUDA]]
|-
!Quadro GV100<ref name="quadro-gv100">{{cite web|url=https://www.nvidia.com/content/dam/en-zz/Solutions/design-visualization/documents/quadro-volta-gv100-us-nv-623049-r10-hr.pdf|title=Nvidia Quadro GV100 Data Sheet|website=NVIDIA|language=en-us|access-date=2018-11-06|archive-url=https://web.archive.org/web/20180401003614/https://www.nvidia.com/content/dam/en-zz/Solutions/design-visualization/documents/quadro-volta-gv100-us-nv-623049-r10-hr.pdf|archive-date=2018-04-01|url-status=live}}</ref>
|Mar 27, 2018
|GV100-875-A1
|12
|PCIe 3.0 x16
|1132
|1627
|848 (1696)
|5120:320:128:80:640
|128
|6
|32
|870
|HBM2
|4096
|208.4
|521
|14.8
|7.4
|12.1
|4.6
|1.3
|3.0
|7.0
|250
|4x DisplayPort, NVLINK support
|}
=== Quadro RTX x000 / Tx00 / Tx000 series ===
{{Further|Quadro|Turing (microarchitecture)}}
* <sup>1</sup> [[Unified shader model|Unified shaders]]: [[texture mapping unit]]s: [[render output unit]]s: streaming multiprocessors: tensor cores
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
|-
! rowspan="2" | Model
! rowspan="2" |Launch
! rowspan="2" |[[Code name]]<ref name="NVdevsuplist" />
! rowspan="2" | Fab
([[Nanometer|nm]])
! rowspan="2" |[[Computer bus|Bus]] [[I/O interface|interface]]
! rowspan="2" | Core clock
([[Hertz|MHz]])
! rowspan="2" | Boost clock
([[Hertz|MHz]])
! rowspan="2" | Memory clock
([[Hertz|MHz]])
! rowspan="2" | Core config<sup>1</sup>
! colspan="2" |[[GPU cache|Cache]]
! colspan="4" | Memory
! colspan="2" |[[Fillrate]]<ref name="GPUdb" />
! colspan="2" | Processing power (T[[FLOPS]])<ref name="GPUdb" />
! colspan="5" | Supported [[Application programming interface|API]] version
! rowspan="2" |[[Thermal design power|TDP]]
(Watts)
! rowspan="2" |Release price (USD)
! rowspan="2" | Notes
|-
!L1/SM ([[Kibibyte|KiB]])
!L2<br />([[Mebibyte|MiB]])
! Size
([[Gibibyte|GiB]])
! Bandwidth
([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width
([[bit]])
!Pixel
([[Pixel|GP]]/s)
!Texture
([[Texel (graphics)|GT]]/s)
![[Single precision floating-point format|Single precision]]
![[Double precision floating-point format|Double precision]]
![[Direct3D]]
![[OpenGL]]
![[Vulkan (API)|Vulkan]]
![[OpenCL]]
![[CUDA]]
|-
!Quadro RTX 4000
|Nov 13, 2018
|TU104-850-A1
| rowspan="4" | 12
| rowspan="4" | PCIe 3.0 x16
|1005
|1545
|1625<br />(13000)
|2304:144:64:36:288
| rowspan="4" |64
| rowspan="2" |4
|8
|416
| rowspan="4" | GDDR6
| rowspan="2" | 256
|98.9
|222.5
|7.119
|0.2225
| rowspan="4" | 12.2
| rowspan="4" | 4.6
| rowspan="4" | 1.3
| rowspan="4" | 3.0
| rowspan="4" | 7.5
|100-125
|$899
| rowspan="4" | 3x DisplayPort
1x USB Type-C
|-
!Quadro RTX 5000
|Aug 13, 2018
|TU104-875-A1
|1620
|1815
| rowspan="3" |1750<br />(14000)
|3072:192:64:48:384
|16
|448
|116.2
|348.5
|11.15
|0.3485
|125-230
|$2299
|-
!Quadro RTX 6000
|Aug 13, 2018
| rowspan="2" |TU102-875-A1
|1440
| rowspan="2" |1770
| rowspan="2" |4608:288:96:72:576
| rowspan="2" |6
|24
| rowspan="2" | 672
| rowspan="2" | 384
| rowspan="2" |169.9
| rowspan="2" |509.8
| rowspan="2" |16.31
| rowspan="2" |0.5098
| rowspan="2" |100-260
|$6299
|-
!Quadro RTX 8000
|Aug 13, 2018
|1395
|48
|$9999
|}
* <sup>1</sup> [[Unified shader model|Unified shaders]]: [[texture mapping unit]]s: [[render output unit]]s: streaming multiprocessors
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
|-
! rowspan="2" | Model
! rowspan="2" |Launch
! rowspan="2" |[[Code name]]<ref name="NVdevsuplist" />
! rowspan="2" | Fab
([[Nanometer|nm]])
! rowspan="2" |[[Computer bus|Bus]] [[I/O interface|interface]]
! rowspan="2" | Core clock
([[Hertz|MHz]])
! rowspan="2" | Boost clock
([[Hertz|MHz]])
! rowspan="2" | Memory clock
([[Transfer (computing)|MT/s]])
! rowspan="2" | Core config<sup>1</sup>
! colspan="2" |[[GPU cache|Cache]]
! colspan="4" | Memory
! colspan="2" |[[Fillrate]]<ref name="GPUdb" />
! colspan="2" | Processing power (T[[FLOPS]])<ref name="GPUdb" />
! colspan="5" | Supported [[Application programming interface|API]] version
! rowspan="2" |[[Thermal design power|TDP]]
(Watts)
! rowspan="2" | Notes
|-
!L1/SM ([[Kibibyte|KiB]])
!L2<br />([[Mebibyte|MiB]])
! Size
([[Gibibyte|GiB]])
! Bandwidth
([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width
([[bit]])
!Pixel
([[Pixel|GP]]/s)
!Texture
([[Texel (graphics)|GT]]/s)
![[Single precision floating-point format|Single precision]]
![[Double precision floating-point format|Double precision]]
![[Direct3D]]
![[OpenGL]]
![[Vulkan (API)|Vulkan]]
![[OpenCL]]
![[CUDA]]
|-
!NVIDIA T400<ref name="Graphics Cards for Professional Des">{{Cite web|url=https://www.nvidia.com/en-us/design-visualization/desktop-graphics/|title=NVIDIA RTX in Professional Workstations|website=NVIDIA}}</ref><ref>{{cite web |url=https://www.nvidia.com/content/dam/en-zz/Solutions/design-visualization/productspage/quadro/quadro-desktop/nvidia-t400-datasheet-1987150-r3.pdf |title=NVIDIA T400 datasheet |website=www.nvidia.com}}</ref><ref>{{Cite web|url=https://www.techpowerup.com/gpu-specs/t400.c3808|title=NVIDIA T400 Specs {{pipe}} TechPowerUp GPU Database|accessdate=15 April 2024}}</ref>
|May 6, 2021
|TU117
| rowspan="3" | [[TSMC]]<br />[[Die shrink#Half-shrink|12FFN]]
| rowspan="3" | PCIe 3.0 x16
|420
|1425
| rowspan="3" | 10000
|384:24:16:6
| rowspan="3" |64
| rowspan="3" |1
|2<br />4
|80
| rowspan="3" | GDDR6
| 64
|22.8
|34.2
|1.09
|0.0341
| rowspan="3" | 12.1
| rowspan="3" | 4.6
| rowspan="3" | 1.3
| rowspan="3" | 3.0
| rowspan="3" | 7.5
|30
|3x Mini-DisplayPort
|-
!NVIDIA T600<ref name="Graphics Cards for Professional Des"/><ref>{{cite web |url=https://www.nvidia.com/content/dam/en-zz/Solutions/design-visualization/productspage/quadro/quadro-desktop/proviz-print-nvidia-T600-datasheet-us-nvidia-1670029-r5-web.pdf |title=NVIDIA T600 datasheet |website=www.nvidia.com}}</ref><ref>{{Cite web|url=https://www.techpowerup.com/gpu-specs/t600.c3796|title=NVIDIA T600 Specs {{pipe}} TechPowerUp GPU Database|accessdate=15 April 2024}}</ref>
|Apr 12, 2021
|TU117-850-A1
|735
|1335
|640:40:32:10
|4
| rowspan="2" | 160
| rowspan="2" | 128
|42.7
|53.4
|1.7
|0.0531
|40
| rowspan="2" | 4x Mini-DisplayPort
|-
!NVIDIA T1000<ref name="Graphics Cards for Professional Des"/><ref>{{cite web |url=https://www.nvidia.com/content/dam/en-zz/Solutions/design-visualization/productspage/quadro/quadro-desktop/nvidia-t1000-datasheet-1987414-r4.pdf |title=NVIDIA T1000 datasheet |website=www.nvidia.com}}</ref><ref>{{Cite web|url=https://www.techpowerup.com/gpu-specs/t1000.c3797|title=NVIDIA T1000 Specs {{pipe}} TechPowerUp GPU Database|accessdate=15 April 2024}}</ref>
|May 6, 2021
|TU117
|1065
|1395
|896:56:32:14
|4<br />8
|44.6
|78.1
|2.5
|0.0781
|50
|}
Mobile version of the Quadro RTX / T x000 series.
* <sup>1</sup> [[Unified shader model|Unified shaders]]: [[texture mapping unit]]s: [[render output unit]]s: streaming multiprocessors: tensor cores (or FP16 Cores in T x000 Series)
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
|-
! rowspan="2" | Model{{spaces}}name
! rowspan="2" |Launch
! rowspan="2" |[[Code name]]<ref name="NVdevsuplist" />
! rowspan="2" | Fab ([[Nanometer|nm]])
! rowspan="2" |[[Computer bus|Bus]] [[I/O interface|interface]]
! rowspan="2" | Core clock ([[Hertz|MHz]])
! rowspan="2" | Boost clock ([[Hertz|MHz]])
! rowspan="2" | Core config <sup>1</sup>
! rowspan="2" |[[GPU cache|L2 Cache]] ([[Mebibyte|MiB]])
! colspan="5" | Memory
! colspan="2" |[[Fillrate]]
! colspan="3" |Processing power (T[[FLOPS]])
! colspan="5" | Supported [[Application programming interface|API]] version
! rowspan="2" |[[Thermal design power|TDP]] (Watts)
! rowspan="2" | Notes
|-
! Size ([[Gibibyte|GiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
!Memory clock ([[Hertz|MHz]])
! Bus type
! Bus width ([[bit]])
!Pixel ([[Pixel|GP]]/s)
!Texture ([[Texel (graphics)|GT]]/s)
![[Half precision floating-point format|Half precision]]
![[Single precision floating-point format|Single precision]]
![[Double precision floating-point format|Double precision]]
![[Direct3D]]
![[OpenGL]]
![[Vulkan (API)|Vulkan]]
![[OpenCL]]
![[CUDA]]
|-
! rowspan="2" |Quadro T500 Mobile<ref name="hardwareluxx1"/>
| rowspan="2" |Dec 2, 2020
| rowspan="2" |TU117 (N19P-Q1-A1)
| rowspan="15" |12
| rowspan="15" |PCIe 3.0
| rowspan="2" | 1365
| rowspan="2" | 1695
| rowspan="2" | 896:56 :32:14:56
| rowspan="8" | 1
|2
| rowspan="2" |80
| rowspan="2" |1250
| rowspan="8" |GDDR5
| rowspan="2" |64
|54.24
|94.92
|5.591
|2.796
|0.087
| rowspan="15" |12.1
| rowspan="15" |4.6
| rowspan="15" |1.2
| rowspan="15" |3.0
| rowspan="15" |7.5
|25
| rowspan="15" |
|-
| rowspan="7" |4
|49.92
|87.36
|6.075
|3.037
|0.095
|18
|-
!Quadro T550 Mobile<ref name="hardwareluxx1"/>
|May 2022
|TU117
|1065
|1665
|1024:64 :32:16:64
|112
|1500
|64
|53.28
|106.6
|6.82
|3.41
|0.107
|23
|-
!Quadro T600 Mobile<ref name="hardwareluxx1"/>
|Apr 12, 2021
|TU117
|780
|1410
|896:56 :32:14:56
|192
|1500
|128
|45.12
|78.96
|5.053
|2.527
|0.079
|40
|-
!Quadro T1000 Mobile<ref name=":2">{{cite web|url=https://www.nvidia.com/content/dam/en-zz/Solutions/design-visualization/documents/quadro-mobile-line-card-n18-11x8.5-r4-hr.pdf|title=NVIDIA PROFESSIONAL GRAPHICS SOLUTIONS|website=Nvidia.com}}</ref>
|May 27, 2019
|TU117 (N19P-Q1-A1)
|1395
|1455
|768:48 :32:12:1536
|128
|2000
|128
|46.56
|69.84
|5.215
|2.607
|0.082
|40-50
|-
!Quadro T1200 Mobile<ref name="hardwareluxx1"/>
|Apr 12, 2021
|TU117
|1515
|1785
|1024:64 :32:16:64
|224
|1500
|128
|57.12
|114.2
|7.311
|3.656
|0.1142
|18
|-
!Quadro T2000 Mobile<ref name=":2" />
|May 27, 2019
|TU117 (N19P-Q3-A1)
|1575
|1785
| rowspan="2" |1024:64 :32:16:2048
| rowspan="2" |128
|2001
| rowspan="2" |128
|57.1
|114.2
|7.311
|3.656
|0.114
|60
|-
!Quadro T2000 Max-Q<ref>{{Cite web |date=2024-09-29 |title=NVIDIA Quadro T2000 Max-Q Specs |url=https://www.techpowerup.com/gpu-specs/quadro-t2000-max-q.c3436 |access-date=2024-09-29 |website=TechPowerUp |language=en}}</ref>
|May 27, 2019
|TU117
|1035
|1395
|1250
|44.64
|89.28
|5.714
|2.857
|0.089
|40
|-
!Quadro RTX 3000 Mobile<ref name=":2" />
|May 27, 2019
|TU106 (N19E-Q1-KA-K1)
|945
|1380
| rowspan="2" |2304:144 :48:36:288
| rowspan="6" |4
| rowspan="2" |6
|448
|1750
| rowspan="7" |GDDR6
| rowspan="6" |256
|88.32
|198.7
|12.72
|6.359
|0.1987
|60-80
|-
!Quadro RTX 3000 Max-Q<ref>{{cite web|url=https://www.techpowerup.com/gpu-specs/quadro-rtx-3000-max-q.c3429|title=NVIDIA Quadro RTX 3000 Max-Q Specs|website=TechPowerUp|language=en|access-date=2020-03-04}}</ref>
|May 27, 2019
|TU106
|600
|1215
|416
|1625
|77.76
|175.0
|11.2
|5.6
|0.175
|60
|-
!Quadro RTX 4000 Mobile<ref name=":2" />
|May 27, 2019
|TU104 (N19E-Q3-A1)
|1110
|1560
| rowspan="2" |2560:160 :64:40:320
| rowspan="2" |8
|448
|1750
|99.84
|249.6
|15.97
|7.987
|0.25
|110
|-
!Quadro RTX 4000 Max-Q<ref>{{cite web|url=https://www.techpowerup.com/gpu-specs/quadro-rtx-4000-max-q.c3427|title=NVIDIA Quadro RTX 4000 Max-Q Specs|website=TechPowerUp|language=en|access-date=2020-03-04}}</ref>
|May 27, 2019
|TU104
|780
|1380
|416
|1625
|88.32
|220.8
|14.13
|7.066
|0.221
|80
|-
!Quadro RTX 5000 Mobile<ref name=":2" />
|May 27, 2019
|TU104 (N19E-Q5-A1)
|1035
|1530
| rowspan="2" |3072:192 :64:48:384
| rowspan="2" |16
|448
|1750
|98.88
|296.6
|18.98
|9.492
|0.297
|110
|-
!Quadro RTX 5000 Max-Q<ref>{{cite web|url=https://www.techpowerup.com/gpu-specs/quadro-rtx-5000-max-q.c3432|title=NVIDIA Quadro RTX 5000 Max-Q Specs|website=TechPowerUp|language=en|access-date=2025-04-16}}</ref>
|May 27, 2019
|TU104
|600
|1350
|384
|1500
|86.40
|259.2
|16.59
|8.294
|0.26
|80
|-
!Quadro RTX 6000 Mobile<ref>{{cite web|url=https://www.techpowerup.com/gpu-specs/quadro-rtx-6000-mobile.c3497|title=NVIDIA Quadro RTX 6000 Specs|website=TechPowerUp|language=en|access-date=2025-04-16}}</ref><ref>{{cite web|url=https://www.nvidia.com/en-us/design-visualization/quadro-in-laptops/asus-proart-studiobook-one/|title=NVIDIA Quadro RTX 6000 Mobile product page|website=Nvidia|language=en|access-date=2025-04-16}}</ref>
|Sep 4, 2019
|TU102
|1275
|1455
|4608:288 :96:72:576
|6
|24
|672
|1750
|384
|139.7
|419.0
|26.82
|13.41
|0.42
|80
|-
|}
=== RTX Ax000 series ===
{{Further|Quadro|Ampere (microarchitecture)}}
* <sup>1</sup> [[Unified shader model|Unified shaders]]: [[texture mapping unit]]s: [[render output unit]]s: streaming multiprocessors: tensor cores
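* As a worked check (an illustrative calculation, not an Nvidia-published formula), the single-precision figures below follow from the shader count at two FLOPs per clock at the boost clock; for the RTX A6000 this gives <math>10752 \times 2 \times 1800\,\text{MHz} \approx 38.7\,\text{TFLOPS}</math>, and the sparse FP16 tensor-compute column is eight times the single-precision value.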
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
|-
! rowspan="2" | Model{{spaces}}name
! rowspan="2" |Launch
! rowspan="2" |[[Code name]]<ref name="NVdevsuplist" />
! rowspan="2" | Fab
([[Nanometer|nm]])
! rowspan="2" |[[Computer bus|Bus]] [[I/O interface|interface]]
! rowspan="2" | Core clock ([[Hertz|MHz]])
! rowspan="2" | Boost clock ([[Hertz|MHz]])
! rowspan="2" | Memory clock ([[Hertz|MHz]])
! rowspan="2" | Core config<sup>1</sup>
! colspan="2" |[[GPU cache|Cache]]
! colspan="4" | Memory
! colspan="3" |[[Fillrate]]<ref name="GPUdb" />
! colspan="4" | Processing power (T[[FLOPS]])<ref name="GPUdb" />
! colspan="5" | Supported [[Application programming interface|API]] version
! rowspan="2" |[[Thermal design power|TDP]] (Watts)
! rowspan="2" |Release price (USD)
! rowspan="2" | Notes
|-
!L1/SM ([[Kibibyte|KiB]])
!L2<br>([[Mebibyte|MiB]])
! Size ([[Gibibyte|GiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width ([[bit]])
!Pixel ([[Pixel|GP]]/s)
!Texture ([[Texel (graphics)|GT]]/s)
![[Ray tracing (graphics)|Ray Tracing]] (T[[FLOPS]])
![[Half precision floating-point format|Half precision]]
![[Single precision floating-point format|Single precision]]
![[Double precision floating-point format|Double precision]]
![[Tensor]] compute (FP16) (sparse)
![[Direct3D]]
![[OpenGL]]
![[Vulkan (API)|Vulkan]]
![[OpenCL]]
![[CUDA]]
|-
!RTX A400<ref>{{Cite web |title=NVIDIA RTX A400 |url=https://resources.nvidia.com/en-us-design-viz-stories-ep/nvidia-rtx-a400?lx=CCKW39 |access-date=2024-09-05 |website=NVIDIA |language=en}}</ref>
|{{Date table sorting|2024|Apr|16}}
|GA107
| rowspan="9" |8
|PCIe 4.0 x8
|727
|1762
|1500<br />(12000)
|768:24 :16:6:24
| rowspan="9" |128
| rowspan="2" |2
|4
|96
| rowspan="9" |GDDR6
|64
|28.19
|42.29
|5.4
|2.706
|2.706
|0.04229
|21.7
| rowspan="9" |12.2
| rowspan="9" |4.6
| rowspan="9" |1.3
| rowspan="9" |3.0
| rowspan="9" |8.6
|50
|$135
|4x mini DisplayPort
|-
!RTX A1000<ref>{{Cite web |title=NVIDIA RTX A1000 |url=https://resources.nvidia.com/en-us-design-viz-stories-ep/nvidia-rtx-a1000?lx=CCKW39 |access-date=2024-09-05 |website=NVIDIA |language=en}}</ref>
|{{Date table sorting|2024|Apr|16}}
|GA107
|PCIe 4.0 x8
|727
|1462
|1500<br />(12000)
|2304:72 :32:18:72
|8
|192
|128
|46.78
|105.3
|13.2
|6.737
|6.737
|0.1053
|53.8
|50
|$365
|4x mini DisplayPort
|-
! rowspan="2" | RTX A2000<ref>{{Cite web|url=https://www.nvidia.com/en-us/design-visualization/rtx-a2000/|title=Take Your Design Workflows to the Next Level|website=NVIDIA}}</ref><ref>{{cite web |url=https://www.nvidia.com/content/dam/en-zz/Solutions/design-visualization/rtx-a2000/nvidia-rtx-a2000-datasheet.pdf |title=NVIDIA RTX A4000 Tensor Core Product Brief |website=www.nvidia.com}}</ref><ref>{{Cite web|url=https://www.pny.com/nvidia-rtx-a2000?iscommercial=true|title=Discover NVIDIA RTX A2000 {{pipe}} Graphics Card {{pipe}} pny.com|website=www.pny.com}}</ref><ref name="hardwareluxx1">{{Cite web|url=https://www.hardwareluxx.de/index.php/news/hardware/grafikkarten/58351-nvidia-aktualisiert-sein-notebook-gpu-lineup-plus-neue-desktop-karte.html|title=NVIDIA aktualisiert sein Notebook-GPU-Lineup plus neue Desktop-Karte|first=Andreas|last=Schilling|date=March 22, 2022|website=Hardwareluxx}}</ref>
| rowspan="2" |{{Date table sorting|2021|Aug|10}}
| rowspan="2" |GA106-850-A1
| rowspan="2" |PCIe 4.0 x16
| rowspan="2" |562
| rowspan="2" |1200
| rowspan="2" |1500<br>(12000)
| rowspan="2" |3328:104 :48:26:104
| rowspan="2" |3
| 6
| rowspan="2" |288
| rowspan="2" |192
| rowspan="2" |57.6
| rowspan="2" |124.8
| rowspan="2" |15.6
| rowspan="2" |7.987
| rowspan="2" |7.987
| rowspan="2" |0.1248
| rowspan="2" |63.9
| rowspan="2" |70
| rowspan="2" |$449
| rowspan="2" |4x mini DisplayPort
|-
|12
|-
! RTX A4000<ref>{{Cite web|url=https://www.nvidia.com/en-us/design-visualization/rtx-a4000/|title=NVIDIA RTX A4000 Graphics Card|website=NVIDIA}}</ref><ref>{{cite web |url=https://www.nvidia.com/content/dam/en-zz/Solutions/gtcs21/rtx-a4000/nvidia-rtx-a4000-datasheet.pdf |title=NVIDIA RTX A4000 Tensor Core Product Brief |website=www.nvidia.com}}</ref><ref>{{Cite web|url=https://www.pny.com/nvidia-rtx-a4000?iscommercial=true|title=Discover NVIDIA RTX A4000 GPU {{pipe}} pny.com|website=www.pny.com}}</ref>
|{{Date table sorting|2021|Apr|12}}
|GA104-875-A1
|PCIe 4.0 x16
|735
|1560
|1750<br>(14000)
|6144:192 :96:48:192
|4
|16
|448
|256
|149.769
|299.5
|37.4
|19.17
|19.17
|0.2995
|153.4
|140
|$1000
|4x DisplayPort
|-
! RTX A4500<ref>{{Cite web|url=https://www.nvidia.com/content/dam/en-zz/Solutions/design-visualization/rtx/nvidia-rtx-a4500-datasheet.pdf|title=NVIDIA RTX A4500 Datasheet|accessdate=15 April 2024}}</ref>
|{{Date table sorting|2021|Nov|23}}
|GA102-825-KD-A1
|PCIe 4.0 x16
|1050
|1650
|2000<br>(16000)
|7168:224 :80:56:224
| rowspan="4" |6
|20
|640
|320
|132.008
|369.623
|46.2
|23.66
|23.66
|0.3696
|189.2
|200
|
|4x DisplayPort
|-
! RTX A5000<ref>{{Cite web|url=https://www.nvidia.com/en-us/design-visualization/rtx-a5000/|title=NVIDIA RTX A5000 Graphics Card|website=NVIDIA}}</ref><ref>{{cite web |url=https://www.nvidia.com/content/dam/en-zz/Solutions/gtcs21/rtx-a5000/nvidia-rtx-a5000-datasheet.pdf |title=NVIDIA RTX A5000 Tensor Core Product Brief |website=www.nvidia.com}}</ref><ref>{{Cite web|url=https://www.pny.com/nvidia-rtx-a5000?iscommercial=true|title=Discover NVIDIA RTX A5000 {{pipe}} Professional GPU {{pipe}} pny.com|website=www.pny.com}}</ref>
|{{Date table sorting|2021|Apr|12}}
|GA102-850-A1
|PCIe 4.0 x16
|1170
|1695
|2000<br>(16000)
|8192:256 :96:64:256
|24
|768
|384
|162.730
|433.947
|54.2
|27.77
|27.77
|0.4339
|222.2
|230
|$2250
|4x DisplayPort
|-
! RTX A5500<ref name="hardwareluxx1"/>
|{{Date table sorting|2022|Mar|22}}
|GA102-860-A1
|PCIe 4.0 x16
|1080
|1665
|2000<br>(16000)
|10240:320 :112:80:320
|24
|768
|384
|186.491
|532.834
|66.6
|34.10
|34.10
|0.5328
|272.8
|230
|$3600
|4x DisplayPort
|-
! RTX A6000<ref>{{Cite web|url=https://www.nvidia.com/en-us/design-visualization/rtx-a6000/|title=NVIDIA RTX A6000 Powered by Ampere Architecture|website=NVIDIA}}</ref><ref>{{cite web |url=https://www.nvidia.com/content/dam/en-zz/Solutions/design-visualization/quadro-product-literature/proviz-print-nvidia-rtx-a6000-datasheet-us-nvidia-1454980-r9-web%20(1).pdf |title=NVIDIA RTX A6000 Tensor Core Product Brief |website=www.nvidia.com}}</ref><ref>{{Cite web|url=https://www.pny.com/nvidia-rtx-a6000|title=Discover NVIDIA RTX A6000 {{pipe}} Graphics Card {{pipe}} pny.com|website=www.pny.com}}</ref>
|{{Date table sorting|2020|Oct|5}}
|GA102-875-A1
|PCIe 4.0 x16
|1410
|1800
|2000<br />(16000)
|10752:336 :112:84:336
|48
|768
|384
|201.612
|604.838
|75.6
|38.71
|38.71
|0.6048
|309.7
|300
|$4649
|4x DisplayPort
|}
Mobile version of the RTX Ax000 series.
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
|-
! rowspan="2" | Model{{spaces}}name
! rowspan="2" |Launch
! rowspan="2" |[[Code name]]<ref name="NVdevsuplist" />
! rowspan="2" | Fab ([[Nanometer|nm]])
! rowspan="2" |[[Computer bus|Bus]] [[I/O interface|interface]]
! rowspan="2" | Core clock ([[Hertz|MHz]])
! rowspan="2" | Boost clock ([[Hertz|MHz]])
! rowspan="2" | Core config<sup>1</sup>
! rowspan="2" |[[GPU cache|L2 Cache]] ([[Mebibyte|MiB]])
! colspan="5" | Memory
! colspan="2" |[[Fillrate]]
! colspan="3" |Processing power (T[[FLOPS]])
! colspan="5" | Supported [[Application programming interface|API]] version
! rowspan="2" |[[Thermal design power|TDP]] (Watts)
! rowspan="2" |Release price (USD)
! rowspan="2" | Notes
|-
! Size ([[Gibibyte|GiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
!Memory clock ([[Hertz|MHz]])
! Bus type
! Bus width ([[bit]])
!Pixel ([[Pixel|GP]]/s)
!Texture ([[Texel (graphics)|GT]]/s)
![[Half precision floating-point format|Half precision]]
![[Single precision floating-point format|Single precision]]
![[Double precision floating-point format|Double precision]]
![[Direct3D]]
![[OpenGL]]
![[Vulkan (API)|Vulkan]]
![[OpenCL]]
![[CUDA]]
|-
!RTX A500 Mobile<ref name="hardwareluxx1"/><ref>{{Cite web |date=2024-09-29 |title=NVIDIA RTX A500 Mobile Specs |url=https://www.techpowerup.com/gpu-specs/rtx-a500-mobile.c3939 |access-date=2024-09-29 |website=TechPowerUp |language=en}}</ref>
|Mar 22, 2022
|GA107S
| rowspan="10" |[[Samsung]] 8N
| rowspan="10" |PCIe 4.0
|832
|1537
|2048:64 :32:16:64
|2
|4
|112
|1500
| rowspan="10" |[[GDDR6 SDRAM|GDDR6]]
|64
|49.18
|98.37
|6.296
|6.296
|0.09837
| rowspan="10" |12.2
| rowspan="10" |4.6
| rowspan="10" |1.3
| rowspan="10" |3.0
| rowspan="10" |8.6
| rowspan="2" |60
|
|
|-
!RTX A1000 Mobile<ref name="hardwareluxx1"/><ref>{{Cite web |date=2024-09-29 |title=NVIDIA RTX A1000 Mobile Specs |url=https://www.techpowerup.com/gpu-specs/rtx-a1000-mobile.c3920 |access-date=2024-09-29 |website=TechPowerUp |language=en}}</ref>
|Mar 30, 2022
|GA107
|630
| rowspan="2" |1140
|2048:64 :32:16:64
| rowspan="2" |2
|4
|224
| rowspan="2" |1375
|128
|36.48
|72.96
|4.669
|4.669
|0.07296
|$365
|
|-
!RTX A1000 Mobile 6{{spaces}}GB<ref>{{Cite web |date=2024-09-29 |title=NVIDIA RTX A1000 Mobile 6 GB Specs |url=https://www.techpowerup.com/gpu-specs/rtx-a1000-mobile-6-gb.c4137 |access-date=2024-09-29 |website=TechPowerUp |language=en}}</ref>
|Mar 30, 2022
|GA107
|652
|2560:80 :32:20:80
|6
|168
|96
|36.48
|91.20
|5.837
|5.837
|0.0912
| rowspan="2" |95
|
|
|-
!RTX A2000 Mobile<ref>{{Cite web |date=2024-09-29 |title=NVIDIA RTX A2000 Mobile Specs |url=https://www.techpowerup.com/gpu-specs/rtx-a2000-mobile.c3827 |access-date=2024-09-29 |website=TechPowerUp |language=en}}</ref>
|Apr 12, 2021
|GA106
|1215
|1687
|2560:80 :48:20:80
|2
|4
|192
|1500
|128
|80.98
|135.0
|8.637
|8.637
|0.135
|
|
|-
!RTX A3000 Mobile<ref>{{Cite web |date=2024-09-29 |title=NVIDIA RTX A3000 Mobile Specs |url=https://www.techpowerup.com/gpu-specs/rtx-a3000-mobile.c3806 |access-date=2024-09-29 |website=TechPowerUp |language=en}}</ref>
|Apr 12, 2021
|GA104-A1
|600
|1230
| rowspan="2" |4096:128 :64:32:128
| rowspan="2" |4
|6
|264
|1375
| rowspan="2" |192
|78.72
|157.4
|10.08
|10.08
|0.1574
| 70
|
|
|-
!RTX A3000 Mobile 12{{spaces}}GB<ref name="hardwareluxx1" /><ref>{{Cite web |date=2024-09-29 |title=NVIDIA RTX A3000 Mobile 12 GB Specs |url=https://www.techpowerup.com/gpu-specs/rtx-a3000-mobile-12-gb.c3903 |access-date=2024-09-29 |website=TechPowerUp |language=en}}</ref>
|Mar 22, 2022
|GA104-A1
|855
|1440
|12
|336
|1750
|92.16
|184.3
|11.8
|11.8
|0.1843
|115
|
|
|-
!RTX A4000 Mobile<ref>{{Cite web |date=2024-09-29 |title=NVIDIA RTX A4000 Mobile Specs |url=https://www.techpowerup.com/gpu-specs/rtx-a4000-mobile.c3804 |access-date=2024-09-29 |website=TechPowerUp |language=en}}</ref>
|Apr 12, 2021
|GA104-A1
|1140
|1680
|5120:160 :80:40:160
|4
|8
|384
|1500
| rowspan="4" |256
|134.4
|268.8
|17.2
|17.2
|0.2688
| rowspan="2" |140
|
|
|-
!RTX A4500 Mobile<ref name="hardwareluxx1" /><ref>{{Cite web |date=2024-09-29 |title=NVIDIA RTX A4500 Mobile Specs |url=https://www.techpowerup.com/gpu-specs/rtx-a4500-mobile.c3851 |access-date=2024-09-29 |website=TechPowerUp |language=en}}</ref>
|Mar 22, 2022
|GA104-A1
|930
|1500
|5888:184 :96:46:184
|4
| rowspan="3" |16
|512
|2000
|144.0
|276.0
|17.66
|17.66
|0.276
|
|
|-
!RTX A5000 Mobile<ref name="hardwareluxx1" /><ref>{{Cite web |date=2024-09-29 |title=NVIDIA RTX A5000 Mobile Specs |url=https://www.techpowerup.com/gpu-specs/rtx-a5000-mobile.c3805 |access-date=2024-09-29 |website=TechPowerUp |language=en}}</ref>
|Apr 12, 2021
|GA104-A1
|900
|1575
|6144:192 :96:48:192
|4
|448
|1750
|151.2
|302.4
|19.35
|19.35
|0.3024
|150
|
|
|-
!RTX A5500 Mobile<ref>{{Cite web |title=ampere-mobile-line-card-us-web.pdf |url=https://nvdam.widen.net/s/97whpwwqqb/ampere-mobile-line-card-us-web |access-date=2023-04-18 |website=nvdam.widen.net |archive-date=2023-04-18 |archive-url=https://web.archive.org/web/20230418135351/https://nvdam.widen.net/s/97whpwwqqb/ampere-mobile-line-card-us-web |url-status=dead }}</ref><ref name="hardwareluxx1" /><ref>{{Cite web |date=2024-09-29 |title=NVIDIA RTX A5500 Mobile Specs |url=https://www.techpowerup.com/gpu-specs/rtx-a5500-mobile.c3902 |access-date=2024-09-29 |website=TechPowerUp |language=en}}</ref>
|Mar 22, 2022
|GA103-A1
|975
|1500
|7424:232 :96:58:232
|4
|512
|2000
|159.85
|386.30
|22.27
|22.27
|0.384
|165
|
|
|}
=== RTX Ada Generation ===
{{Further|Quadro|Ada Lovelace (microarchitecture)}}
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
|-
! rowspan="2" | Model{{spaces}}name
! rowspan="2" |Launch
! rowspan="2" |[[Code name|Code{{spaces}}name]]<ref name="NVdevsuplist" />
! rowspan="2" | Fab ([[Nanometer|nm]])
! rowspan="2" |[[Computer bus|Bus]] [[I/O interface|interface]]
! rowspan="2" | Core clock ([[Hertz|MHz]])
! rowspan="2" | Boost clock ([[Hertz|MHz]])
! rowspan="2" | Memory clock ([[Hertz|MHz]])
! rowspan="2" | Core config<sup>1</sup>
! colspan="2" |[[GPU cache|Cache]]
! colspan="4" | Memory
! colspan="3" |[[Fillrate]]<ref name="GPUdb" />
! colspan="4" | Processing power (T[[FLOPS]])<ref name="GPUdb" />
! colspan="5" | Supported [[Application programming interface|API]] version
! rowspan="2" |[[Thermal design power|TDP]] (Watts)
! colspan="2" |Size
! rowspan="2" |Release price (USD)
! rowspan="2" |Notes
|-
!L1/SM ([[Kibibyte|KiB]])
!L2<br>([[Mebibyte|MiB]])
! Size ([[Gibibyte|GiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width ([[bit]])
!Pixel ([[Pixel|GP]]/s)
!Texture ([[Texel (graphics)|GT]]/s)
![[Ray tracing (graphics)|Ray Tracing]] (T[[FLOPS]])
![[Half precision floating-point format|Half precision]]
![[Single precision floating-point format|Single precision]]
![[Double precision floating-point format|Double precision]]
![[Tensor]] compute (FP16) (sparse)
![[Direct3D]]
![[OpenGL]]
![[Vulkan (API)|Vulkan]]
![[OpenCL]]
![[CUDA]]
!Profile
!Slots
|-
! RTX 2000 Ada Generation<ref>{{Cite web |title=NVIDIA RTX 2000 Ada Generation |url=https://resources.nvidia.com/en-us-design-viz-stories-ep/proviz-rtx-2000-ada-datasheet |access-date=2024-06-15 |website=NVIDIA |language=en}}</ref><ref>{{Cite web |date=2024-06-15 |title=NVIDIA RTX 2000 Ada Generation Specs |url=https://www.techpowerup.com/gpu-specs/rtx-2000-ada-generation.c4199 |access-date=2024-06-15 |website=TechPowerUp |language=en}}</ref>
|{{Date table sorting|2024|Feb|12}}
|AD107-875-A1
| rowspan="7" |[[TSMC]] [[5 nm process|4N]]
|PCIe 4.0 x8
|1620
|2130
|2000
|2816:88 :48:22:88
| rowspan="7" |128
|12
|16
|224
| rowspan="7" |GDDR6
|128
|102.2
|187.4
|27.7
|12.0
|12.0
|0.1874
|191.9
| rowspan="7" |12.2
| rowspan="7" |4.6
| rowspan="7" |1.3
| rowspan="7" |3.0
| rowspan="7" |8.9
|70
|HHHL
|Double
|$649
|4x mini DisplayPort
|-
!RTX 4000 SFF Ada Generation<ref>{{Cite web |title=NVIDIA RTX 4000 SFF Ada Generation Graphics Card |url=https://www.nvidia.com/content/dam/en-zz/Solutions/rtx-4000-sff/proviz-rtx-4000-sff-ada-datasheet-2616456-web.pdf |access-date=Oct 10, 2023 |website=Nvidia}}</ref>
|{{Date table sorting|2023|Mar|21}}
|AD104
|PCIe 4.0 x16
|720
|1560
|1750<br />(14000)
|6144:192 :80:48:192
| rowspan="3" |48
|20
|280
|160
|125.2
|300.5
|44.3
|19.23
|19.23
|0.3
|306.8
|70
|HHHL
|Double
|$1250
|4x mini DisplayPort
|-
!RTX 4000 Ada Generation<ref>{{Cite web |title=NVIDIA RTX 4000 Ada Generation Datasheet |url=https://resources.nvidia.com/en-us-design-viz-stories-ep/rtx-4000-ada-datashe |access-date=2023-10-10 |website=NVIDIA |language=en}}</ref>
|{{Date table sorting|2023|Aug|9}}
|AD104
|PCIe 4.0 x16
|1500
|2175
|1750<br>(14000)
|6144:192 :80:48:192
|20
|360
|160
|174
|417.6
|61.8
|26.73
|26.73
|0.417
|427.6
|130
|FHFL
|Single
|$1250
|4x DisplayPort
|-
!RTX 4500 Ada Generation<ref>{{Cite web |title=NVIDIA RTX 4500 Ada Generation Graphics Card |url=https://resources.nvidia.com/en-us-design-viz-stories-ep/print-nvidia-rtx-450?lx=CCKW39&contentType=data-sheet |access-date=2023-10-10 |website=NVIDIA |language=en-us}}</ref>
|{{Date table sorting|2023|Aug|9}}
|AD103
|PCIe 4.0 x16
|2070
|2580
|2250<br />(18000)
|7680:240 :80:60:240
|24
|432
|192
|206
|620
|91.6
|39.63
|39.63
|0.619
|637.8
|210
|FHFL
|Double
|$2250
|4x DisplayPort
|-
!RTX 5000 Ada Generation<ref>{{Cite web |title=NVIDIA RTX 5000 Ada Generation Datasheet |url=https://resources.nvidia.com/en-us-design-viz-stories-ep/rtx-5000-ada-datasheet |access-date=2023-10-10 |website=NVIDIA |language=en}}</ref>
|{{Date table sorting|2023|Aug|9}}
|AD102-850-<br>KAB-A1
|PCIe 4.0 x16
|1155
|2550
|2250 (18000)
|12800:400 :160:100:400
| rowspan="2" |72
|32
|576
|256
|408.0
|1020
|151.0
|65.28
|65.28
|1.02
|1044
|250
|FHFL
|Double
|$4000
|4x DisplayPort
|-
!RTX 5880 Ada Generation<ref>{{Cite web |title=NVIDIA RTX 5880 Ada Generation |url=https://www.nvidia.com/en-us/design-visualization/rtx-5880/ |access-date=Jan 5, 2024 |website=NVIDIA}}</ref>
|{{Date table sorting|2024|Jan|5}}
|AD102
|PCIe 4.0 x16
|975
|2460
|2500<br>(20000)
|14080:440 :176:110:440
|48
|960
|384
|433.0
|1082
|160.2
|69.27
|69.27
|1.082
|1108
|285
|FHFL
|Double
|$6999
|4x DisplayPort
|-
!RTX 6000 Ada Generation<ref>{{Cite web |title=NVIDIA RTX 6000 Ada Generation |url=https://www.nvidia.com/content/dam/en-zz/Solutions/design-visualization/rtx-6000/proviz-print-rtx6000-datasheet-web-2504660.pdf |access-date=Oct 10, 2023 |website=NVIDIA}}</ref>
|{{Date table sorting|2022|Dec|3}}
|AD102-870-A1
|PCIe 4.0 x16
|915
|2505
|2500<br>(20000)
|18176:568 :192:142:568
|96
|48
|960
|384
|481.0
|1423
|210.6
|91.06
|91.06
|1.423
|1457
|300
|FHFL
|Double
|$6799
|4x DisplayPort
|}
Mobile version of the RTX Ada Generation.
* <sup>1</sup> CUDA cores: RT cores: Tensor cores
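The half- and single-precision figures in this table follow approximately from the CUDA core count and boost clock. As a worked example, using the RTX 500 Mobile Ada Generation entry below:
<math display="block">\text{FP32 throughput} \approx 2 \times \text{CUDA cores} \times f_\text{boost} = 2 \times 2048 \times 2.025\ \text{GHz} \approx 8.29\ \text{TFLOPS}</math>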
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
|-
! rowspan="2" | Model{{spaces}}name
! rowspan="2" |Launch
! rowspan="2" |[[Code name]]<ref name="NVdevsuplist" />
! rowspan="2" | Fab ([[Nanometer|nm]])
! rowspan="2" |[[Computer bus|Bus]] [[I/O interface|interface]]
! rowspan="2" | Core clock ([[Hertz|MHz]])
! rowspan="2" | Boost clock ([[Hertz|MHz]])
! rowspan="2" | Core config<sup>1</sup>
! colspan="2" |[[GPU cache|Cache]]
! colspan="5" | Memory
! colspan="2" |[[Fillrate]]
! colspan="4" |Processing power (T[[FLOPS]])
! colspan="5" | Supported [[Application programming interface|API]] version
! rowspan="2" |[[Thermal design power|TDP]] (Watts)
! rowspan="2" | Notes
|-
!L1/SM ([[Kibibyte|KiB]])
!L2<br>([[Mebibyte|MiB]])
! Size ([[Gibibyte|GiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
!Memory clock ([[Hertz|MHz]])
! Bus type
! Bus width ([[bit]])
!Pixel ([[Pixel|GP]]/s)
!Texture ([[Texel (graphics)|GT]]/s)
![[Half precision floating-point format|Half precision]]
![[Single precision floating-point format|Single precision]]
![[Double precision floating-point format|Double precision]]
!Tensor
![[Direct3D]]
![[OpenGL]]
![[Vulkan (API)|Vulkan]]
![[OpenCL]]
![[CUDA]]
|-
!RTX 500 Mobile Ada Generation<ref name=":15">{{Cite web |title=proviz-rtx-mobile-line-card.pdf |url=https://images.nvidia.com/aem-dam/en-zz/Solutions/design-visualization/documents/proviz-rtx-mobile-line-card.pdf |access-date=12 June 2024 |website=nvidia.com}}</ref><ref>{{cite web |url=https://www.techpowerup.com/gpu-specs/rtx-500-mobile-ada-generation.c4207 |title=NVIDIA RTX 500 Mobile Ada Generation}}</ref>
| rowspan="2" |Feb, 2024
|AD107
| rowspan="7" |[[TSMC]] [[5 nm process|4N]]
| rowspan="7" |PCIe 4.0
|1485
|2025
|2048:16:64
| rowspan="7" |128
| rowspan="2" |12
|4
|128
| rowspan="4" |2000
| rowspan="7" |[[GDDR6 SDRAM|GDDR6]]
|64
|64.8
|129.6
|8.294
|8.294
|0.1296
|147.4
| rowspan="7" |12.2
| rowspan="7" |4.6
| rowspan="7" |1.3
| rowspan="7" |3.0
| rowspan="7" |8.9
|35–60
|
|-
!RTX 1000 Mobile Ada Generation<ref name=":15" /><ref>{{cite web |url=https://www.techpowerup.com/gpu-specs/rtx-1000-mobile-ada-generation.c4208 |title=NVIDIA RTX 1000 Mobile Ada Generation}}</ref>
|AD107
|1485
|2025
|2560:20:80
|6
|192
|96
|97.2
|162.0
|10.37
|10.37
|0.162
|193.0
|35–140
|
|-
!RTX 2000 Mobile Ada Generation<ref name=":12">{{Cite web |title=proviz-mobile-linecard-update-2653183.pdf |url=https://nvdam.widen.net/s/dmdqnnwcmk/proviz-mobile-linecard-update-2653183 |access-date=2023-06-18 |website=nvdam.widen.net}}</ref><ref>{{cite web |url=https://www.techpowerup.com/gpu-specs/rtx-2000-mobile-ada-generation.c4093 |title=NVIDIA RTX 2000 Mobile Ada Generation}}</ref>
| rowspan="5" |Mar, 2023
|AD107
|1635
|2115
|3072:24:96
|24
|8
|256
|128
|101.5
|203.0
|12.99
|12.99
|0.203
|231.6
|35–140
|
|-
!RTX 3000 Mobile Ada Generation<ref name=":12" /><ref>{{cite web |url=https://www.techpowerup.com/gpu-specs/rtx-3000-mobile-ada-generation.c4095 |title=NVIDIA RTX 3000 Mobile Ada Generation}}</ref>
|AD106
|1395
|1695
|4608:36:144
|32
|8 (ECC)
|256
|128
|81.36
|244.1
|15.62
|15.62
|0.2441
|318.6
|35–140
|
|-
!RTX 3500 Mobile Ada Generation<ref name=":12" /><ref>{{cite web |url=https://www.techpowerup.com/gpu-specs/rtx-3500-mobile-ada-generation.c4098 |title=NVIDIA RTX 3500 Mobile Ada Generation}}</ref>
|AD104
|1110
|1545
|5120:40:160
| rowspan="2" |48
|12 (ECC)
|432
| rowspan="3" |2250
|192
|98.88
|247.2
|15.82
|15.82
|0.2472
|368.6
|60–140
|
|-
!RTX 4000 Mobile Ada Generation<ref name=":12" /><ref>{{cite web |url=https://www.techpowerup.com/gpu-specs/rtx-4000-mobile-ada-generation.c4096 |title=NVIDIA RTX 4000 Mobile Ada Generation}}</ref>
|AD104
|1290
|1665
|7424:58:232
|12 (ECC)
|432
|192
|133.2
|386.3
|24.72
|24.72
|0.3863
|538.0
|60–175
|
|-
!RTX 5000 Mobile Ada Generation<ref name=":12" /><ref>{{cite web |url=https://www.techpowerup.com/gpu-specs/rtx-5000-mobile-ada-generation.c4097 |title=NVIDIA RTX 5000 Mobile Ada Generation}}</ref>
|AD103
|1425
|2115
|9728:76:304
|64
|16 (ECC)
|576
|256
|236.9
|643.0
|41.15
|41.15
|0.643
|681.8
|60–175
|
|}
=== RTX PRO Blackwell series ===
{{Further|Blackwell (microarchitecture)}}
* <sup>1</sup> [[Unified shader model|Unified shaders]]: [[texture mapping unit]]s: [[render output unit]]s: Tensor cores: RT cores
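The memory bandwidth values follow from the per-pin data rate and bus width. As a worked example, using the RTX PRO 4000 Blackwell entry below (by this table's convention, the listed 1750 MHz GDDR7 clock corresponds to an effective 28 Gbit/s per pin):
<math display="block">\text{Bandwidth} = \frac{28\ \text{Gbit/s} \times 192\ \text{bit}}{8\ \text{bit/byte}} = 672\ \text{GB/s}</math>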
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
|-
! rowspan="2" | Model{{spaces}}name
! rowspan="2" |Launch
! rowspan="2" |[[Code name]]<ref name="NVdevsuplist" />
! rowspan="2" | Fab ([[Nanometer|nm]])
! rowspan="2" |[[Computer bus|Bus]] [[I/O interface|interface]]
! rowspan="2" | Core clock ([[Hertz|MHz]])
! rowspan="2" | Boost clock ([[Hertz|MHz]])
! rowspan="2" | Memory clock ([[Hertz|MHz]])
! rowspan="2" | Core config<sup>1</sup>
! colspan="2" |[[GPU cache|Cache]]
! colspan="4" | Memory
! colspan="3" |[[Fillrate]]<ref name="GPUdb" />
! colspan="4" | Processing power (T[[FLOPS]])<ref name="GPUdb" />
! colspan="5" | Supported [[Application programming interface|API]] version
! rowspan="2" |[[Thermal design power|TDP]] (Watts)
! colspan="2" |Size
! rowspan="2" |Release price (USD)
! rowspan="2" |Notes
|-
!L1/SM ([[Kibibyte|KiB]])
!L2<br>([[Mebibyte|MiB]])
! Size ([[Gibibyte|GiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width ([[bit]])
!Pixel ([[Pixel|GP]]/s)
!Texture ([[Texel (graphics)|GT]]/s)
![[Ray tracing (graphics)|Ray Tracing]] (T[[FLOPS]])
![[Half precision floating-point format|Half precision]]
![[Single precision floating-point format|Single precision]]
![[Double precision floating-point format|Double precision]]
![[Tensor]] compute (FP16) (sparse)
![[Direct3D]]
![[OpenGL]]
![[Vulkan (API)|Vulkan]]
![[OpenCL]]
![[CUDA]]
!Profile
!Slots
|-
!RTX PRO 4000 Blackwell
|{{Date table sorting|2025|Mar|18}}
|rowspan="2" |GB203
| rowspan="5" |[[TSMC]] [[5 nm process|4N]]
| rowspan="5" |PCIe 5.0 x16
|1590
|2617
|rowspan="5" |1750
|8960:280 :96:280:70
| rowspan="5" |128
|48
|24
|672
| rowspan="5" |GDDR7
|192
|251.2
|732.8
|TBD
|46.90
|46.90
|0.7328
|TBD
| rowspan="5" |12.2
| rowspan="5" |4.63
| rowspan="5" |1.3
| rowspan="5" |3.0
| rowspan="5" |11.6
|140
|FHFL
|Single
|$1546
|rowspan="5" |4x Display Port
|-
!RTX PRO 4500 Blackwell
|{{Date table sorting|2025|Mar|18}}
|1590
|2617
|10496:328 :112:328:82
|64
|32
|896
|256
|293.1
|858.4
|TBD
|54.94
|54.94
|0.8584
|TBD
|200
|FHFL
|Double
|$2623
|-
!RTX PRO 5000 Blackwell
|{{Date table sorting|2025|Mar|18}}
|rowspan="3" |GB202
|1590
|2617
|14080:440 :176:440:110
|96
|48
|1344
|384
|460.6
|1151
|TBD
|73.69
|73.69
|1.151
|TBD
|rowspan="2" |300
|FHFL
|Double
|$4569
|-
! RTX PRO 6000 Blackwell Max-Q Workstation Edition
|{{Date table sorting|2025|Mar|18}}
| rowspan="2" |1590
|2288
|rowspan="2" |24064:752 :192:752:188
|rowspan="2" |128
|rowspan="2" |96
|rowspan="2" |1792
|rowspan="2" |512
|439.3
|1721
|330
|110.1
|110.1
|1.721
|TBD
|FHFL
|Double
|rowspan="2" |$8565
|-
! RTX PRO 6000 Blackwell Workstation Edition
|{{Date table sorting|2025|Mar|18}}
|2617
|502.5
|1968
|380
|126.0
|126.0
|1.968
|TBD
|600
|FHFL
|Double
|-
|}
Mobile/laptop version of the RTX PRO Blackwell series.<ref>{{Cite web |title=NVIDIA Blackwell RTX PRO Comes to Workstations and Servers for Designers, Developers, Data Scientists and Creatives to Build and Collaborate With Agentic AI |url=https://nvidianews.nvidia.com/news/nvidia-blackwell-rtx-pro-workstations-servers-agentic-ai |access-date=2025-04-27 |website=NVIDIA Newsroom |language=en-us}}</ref>
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
|-
! rowspan="2" | Model{{spaces}}name
! rowspan="2" |Launch
! rowspan="2" |[[Code name]]<ref name="NVdevsuplist" />
! rowspan="2" | Fab ([[Nanometer|nm]])
! rowspan="2" |[[Computer bus|Bus]] [[I/O interface|interface]]
! rowspan="2" | Core clock ([[Hertz|MHz]])
! rowspan="2" | Boost clock ([[Hertz|MHz]])
! rowspan="2" | Memory clock ([[Hertz|MHz]])
! rowspan="2" | Core config<sup>1</sup>
! colspan="2" |[[GPU cache|Cache]]
! colspan="4" | Memory
! colspan="3" |[[Fillrate]]<ref name="GPUdb" />
! colspan="4" | Processing power (T[[FLOPS]])<ref name="GPUdb" />
! colspan="5" | Supported [[Application programming interface|API]] version
! rowspan="2" |[[Thermal design power|TDP]] (Watts)
! rowspan="2" |Release price (USD)
|-
!L1/SM ([[Kibibyte|KiB]])
!L2<br>([[Mebibyte|MiB]])
! Size ([[Gibibyte|GiB]])
! Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
! Bus type
! Bus width ([[bit]])
!Pixel ([[Pixel|GP]]/s)
!Texture ([[Texel (graphics)|GT]]/s)
![[Ray tracing (graphics)|Ray Tracing]] (T[[FLOPS]])
![[Half precision floating-point format|Half precision]]
![[Single precision floating-point format|Single precision]]
![[Double precision floating-point format|Double precision]]
![[Tensor]] compute (FP16) (sparse)
![[Direct3D]]
![[OpenGL]]
![[Vulkan (API)|Vulkan]]
![[OpenCL]]
![[CUDA]]
|-
!RTX PRO 500 Blackwell Mobile
|TBA
| rowspan="2" |GB207
| rowspan="6" |[[TSMC]] [[5 nm process|4N]]
| rowspan="6" |PCIe 5.0 x16
| rowspan="2" |2235
| rowspan="2" |2520
| rowspan="6" |1750
|1792:56 :24:56:14
| rowspan="6" |128
|24
|6
|336
| rowspan="6" |GDDR7
|96
|60.48
|141.1
|
|9.032
|9.032
|0.1411
|
| rowspan="6" |12.2
| rowspan="6" |4.6
| rowspan="6" |1.4
| rowspan="6" |3.0
| rowspan="6" |12.0
|35
|
|-
!RTX PRO 1000 Blackwell Mobile
|TBA
|2560:80 :32:80:20
|32
|8
|448
|128
|80.64
|201.6
|
|12.90
|12.90
|0.2016
|
|35
|
|-
!RTX PRO 2000 Blackwell Mobile
|TBA
|GB206
|952
|1455
|3328:104 :32:104:26
|32
|8
|448
|128
|46.56
|151.3
|
|9.684
|9.684
|0.1513
|
|45
|
|-
!RTX PRO 3000 Blackwell Mobile
|TBA
|GB205
|847
|1447
|5888:184 :80:184:46
|48
|12
|672
|192
|115.8
|266.2
|
|17.04
|17.04
|0.2662
|
|60
|
|-
!RTX PRO 4000 Blackwell Mobile
|TBA
|GB203
|975
|1500
|7680:240 :96:240:60
|64
|16
|896
|256
|144.0
|360.0
|
|23.04
|23.04
|0.360
|
|80
|
|-
!RTX PRO 5000 Blackwell Mobile
|TBA
|GB203
|990
|1515
|10496:328 :112:328:82
|64
|24
|896
|256
|169.7
|496.9
|
|31.80
|31.80
|0.4969
|
|95
|
|-
|}
== Tegra GPU ==
{{main|Tegra}}
==Data center GPUs==
===GRID===
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
|-
! rowspan=2 | Model
! rowspan=2 | Archi-<br />tecture
! rowspan=2 | Chips
! rowspan=2 | Thread processors<br />(total)
! rowspan=2 | Bus interface
! colspan=2 | Memory
! rowspan=2 | [[Thermal design power|TDP]] (Watts)
|-
! Bus type
! Size ([[Gibibyte|GiB]])
|- valign="top"
!style="text-align:center"|GRID K1<ref>{{cite web |url=http://www.nvidia.com/content/grid/pdf/GRID_K1_BD-06633-001_v02.pdf |title=Nvidia Grid K1 Graphics Board |website=Nvidia.com |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20160303220227/http://www.nvidia.com/content/grid/pdf/GRID_K1_BD-06633-001_v02.pdf |archive-date=2016-03-03 |url-status=live }}</ref>
| rowspan="4" | Kepler
|4x GK107
|4x 192
| rowspan="4" |PCIe 3.0 x16
|DDR3
|4x 4 GiB
|130
|- valign="top"
! style="text-align:center;"|GRID K2<ref>{{cite web |url=http://www.nvidia.com/content/grid/pdf/grid_k2_bd-06580-001_v02.pdf |title=Nvidia Grid K2 Graphics Board |website=Nvidia.com |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20160304051836/http://www.nvidia.com/content/grid/pdf/grid_k2_bd-06580-001_v02.pdf |archive-date=2016-03-04 |url-status=live }}</ref>
|2x GK104-895
|2x 1536
| rowspan="3" |GDDR5
|2x 4 GiB
| rowspan="3" |225
|-
!GRID K340
|4x GK107
|4x 384
|4x 1 GiB
|-
!GRID K520
|2x GK104
|2x 1536
|2x 4 GiB
|}
* Data from GRID GPUS<ref>{{cite web |url=http://www.nvidia.com/object/grid-boards.html |title=Virtual GPU Technology for Hardware Acceleration {{pipe}} Nvidia GRID |website=Nvidia.com |access-date=2015-12-11 |archive-url=https://web.archive.org/web/20140630024448/http://www.nvidia.com/object/grid-boards.html |archive-date=2014-06-30 |url-status=live }}</ref>
===Tesla===
{{Further|Nvidia Tesla}}
{{Nvidia Tesla}}
{{anchor|Console_GPUs}}
<!-- Please move to correct section -->
* A10G GPU accelerator (PCIe card) – 300 W TDP, Ampere, 24 GB GDDR6 at 600 GB/s, 80 RT cores
==Console/handheld GPUs ==
<!-- {{no footnotes|section|date = October 2013}}-->
{{Row hover highlight}}
{| class="mw-datatable wikitable sortable sort-under" style="font-size:85%; text-align:center;"
|-
!rowspan=2|Model{{spaces}}name
!rowspan=2|Launch
!rowspan=2|[[Code name]]
!rowspan=2|[[Semiconductor device fabrication|Fab]] ([[nanometer|nm]])
!rowspan=2|[[Bus (computing)|Bus]] [[Input/output|interface]]
!rowspan=2|Core clock ([[Hertz|MHz]])
!rowspan=2|Memory clock ([[Hertz|MHz]])
!rowspan=2|Core config<sup>1,2,3</sup>
!colspan=4|Memory
! colspan="4" |[[Fillrate]]
! colspan="4" |Processing power ([[FLOPS|GFLOPS]])<sup>3</sup>
! Deep Learning
! colspan="4" |Latest supported [[API]] version
|-
!Size ([[Mebibyte|MiB]])
!Bandwidth ([[Data-rate units#Gigabyte per second|GB/s]])
!Bus type
!Bus width ([[bit]])
!MOps/s
![[Texel (graphics)|MTexels]]/s
![[Pixel|MPixels]]/s
!MTri/s
![[Half precision floating-point format|Half precision]]
![[Single precision floating-point format|Single precision]]
!Ray Tracing Performance
![[Tensor]] compute (FP16) (2:1 sparse)
!TOPS (INT8)
![[Direct3D]]
![[OpenGL]]
![[Vulkan]]
!Other
|-
!style="text-align:left"| XGPU{{spaces}}([[Xbox (console)|Xbox]])<ref name="anandtech">{{cite web |url=http://www.anandtech.com/show/853/2 |title=Anandtech Microsoft's Xbox |publisher=Anandtech.com |access-date=2010-11-11 |archive-url=https://web.archive.org/web/20101104081735/http://www.anandtech.com/show/853/2 |archive-date=2010-11-04 |url-status=dead }}</ref><ref>{{cite news |last1=Smith |first1=Tony |title=TSMC starts fabbing Nvidia Xbox chips |url=https://www.theregister.co.uk/2001/02/14/tsmc_starts_fabbing_nvidia_xbox/ |access-date=20 November 2019 |work=[[The Register]] |date=14 February 2001}}</ref>
|November 15, 2001
|[[NV25|NV2A]]
|[[TSMC]] [[180 nanometer|150{{nbsp}}nm]]
|Integrated
|233
|200
|4:2:8:4<sup>1</sup>
|64
|6.4
|DDR
|rowspan="2"|128
|5,800
|1,864
|932
|116.5
| -
|13.9
|N/A
|N/A
|N/A
|8.1
|1.4
| rowspan="2" |N/A
| rowspan="2" |N/A
|-
!style="text-align:left"|[[RSX Reality Synthesizer|RSX]] ([[PlayStation 3|PS3]])<ref name=tegrak1>{{cite web |url=http://pc.watch.impress.co.jp/docs/2005/0701/kaigai195.htm |title=PLAYSTATION 3のグラフィックスエンジンRSX |access-date=2016-10-07 |archive-url=https://web.archive.org/web/20160917174937/http://pc.watch.impress.co.jp/docs/2005/0701/kaigai195.htm |archive-date=2016-09-17 |url-status=live }}</ref><ref>{{cite web |title=NVIDIA Playstation 3 RSX 65nm Specs |url=https://www.techpowerup.com/gpu-specs/playstation-3-rsx-65nm.c1682 |website=TechPowerUp |access-date=21 June 2019}}</ref><ref name="shrink_plan">{{cite web|publisher=Edge Online|date=2008-06-26|title=PS3 Graphics Chip Goes 65nm in Fall|url=http://www.edge-online.com/news/ps3-graphics-chip-goes-65nm-fall/|archive-url=https://web.archive.org/web/20080725024026/http://www.edge-online.com/news/ps3-graphics-chip-goes-65nm-fall/|url-status=dead|archive-date=2008-07-25}}</ref>
|November 11, 2006
|[[GeForce 7 series|G70]]
|[[Toshiba]] [[90 nanometer|90]]/[[65 nanometer|65]]/[[40 nanometer|40 nm]]/[[28 nanometer|28 nm]]
|FlexIO
|500
|650
|24:8:24:8<sup>1</sup>
|256<br>256
|15 (write)<br>20 (read)<br>20.8
|[[XDR DRAM|XDR]]<br>GDDR3
|12,000
|12,000
|4,000
|250
| -
|192 + 153.6 (w/[[Cell (microprocessor)|Cell]] SPUs)
|N/A
|N/A
|N/A
| rowspan="4" |N/A
|ES 1.1 w/[[Cg (programming language)|Cg]]
|-
!style="text-align:left" rowspan="2"|[[Tegra X1|NX-SoC]] ([[Nintendo Switch]])<ref name=TegraX1>{{cite web |url=https://blogs.nvidia.com/blog/2016/10/20/nintendo-switch |title=NVIDIA Technology Powers New Home Gaming System, Nintendo Switch |date=20 October 2016 |access-date=2017-05-17 |archive-url=https://web.archive.org/web/20170518043218/https://blogs.nvidia.com/blog/2016/10/20/nintendo-switch/ |archive-date=2017-05-18 |url-status=live }}</ref><ref>{{cite news |last1=Quilty |first1=John |title=New Report Details Potential Hardware For Nintendo Switch Revision |url=https://techraptor.net/gaming/news/new-report-details-potential-hardware-for-nintendo-switch-revision |access-date=20 November 2019 |work=TechRaptor |date=June 28, 2019}}</ref>
|March 3, 2017
| rowspan="2" |[[Maxwell (microarchitecture)|GM20B]]
|[[TSMC]] [[20 nanometer|20 nm]]
| rowspan="2" |Integrated
| rowspan="2" |384.0 (Undocked)<br/>768.0 (Docked)
| rowspan="2" |1333 (Undocked)<br/>1600 (Docked)
| rowspan="2" |256:16:16<sup>2</sup>
| rowspan="2" |4096
| rowspan="2" |21.3 (Undocked) 25.6<br/>(Docked)
|LPDDR4
| rowspan="2" |64
| rowspan="2" |196,608 (Undocked)<br />393,216 (Docked)
| rowspan="2" |6,144 (Undocked)<br />12,288 (Docked)
| rowspan="2" |6,144 (Undocked)<br />12,288 (Docked)
| rowspan="2" |probably ~500/1000
| rowspan="2" |393.2 (Undocked)<br>786.4 (Docked)
| rowspan="2" |196.6 (Undocked)<br>393.2 (Docked)
| rowspan="2" |N/A
| rowspan="2" |N/A
|N/A
| rowspan="2" |4.6<ref name="SwitchGL">{{cite web |orig-date=2018-04-25 |title=Conformant Products - OpenGL |url=https://www.khronos.org/conformance/adopters/conformant-products/opengl#submission_193 |url-status=live |archive-url=https://web.archive.org/web/20250312172347/https://www.khronos.org/conformance/adopters/conformant-products/opengl#submission_193 |archive-date=2025-03-12 |access-date=2025-03-12 |publisher=[[Khronos Group]]}}</ref><br/>ES 3.2<ref name=SwitchGLES>{{cite web |url=https://www.khronos.org/conformance/adopters/conformant-products#opengles |title=Khronos Products - OpenGL ES |access-date=2017-06-11 |archive-url=https://web.archive.org/web/20170128195542/https://www.khronos.org/conformance/adopters/conformant-products#opengles |archive-date=2017-01-28 |url-status=live }}</ref>
| rowspan="2" |1.3<ref name="SwitchVulkan">{{cite web |date=2022-07-16 |title=Conformant Products - Vulkan |url=https://www.khronos.org/conformance/adopters/conformant-products/vulkan#submission_693 |url-status=live |archive-url=https://web.archive.org/web/20250312172754/https://www.khronos.org/conformance/adopters/conformant-products/vulkan#submission_693 |archive-date=2025-03-12 |access-date=2025-03-12 |publisher=[[Khronos Group]]}}</ref>
| rowspan="3" |Nvidia NVN<ref name="TegraX12">{{cite web|title=NVIDIA Technology Powers New Home Gaming System, Nintendo Switch|date=20 October 2016|url=https://blogs.nvidia.com/blog/2016/10/20/nintendo-switch|url-status=live|archive-url=https://web.archive.org/web/20170518043218/https://blogs.nvidia.com/blog/2016/10/20/nintendo-switch/|archive-date=2017-05-18|access-date=2017-05-17}}</ref>
|-
|August 30, 2019
|[[TSMC]] [[16 nanometer|16 nm]]
|LPDDR4X
|N/A
|-
!style="text-align:left"| [[Nintendo Switch 2|Drake (Nintendo Switch 2)]]
|June 5, 2025
|[[Ampere (microarchitecture)|GA10F]]<br>(Ampere)
|[[Samsung]] 8 nm
|Integrated
|561 (Undocked)<br />1007.25 (Docked)
|2133 (Undocked)<br>3200 (Docked)
|1536(12):48:16:12:48<sup>3</sup>
|12288
|68.26 (Undocked)<br />102.4 (Docked)
|LPDDR5X
|128
|1,723,400 (Undocked)<br />3,094,272 (Docked)
|26,928 (Undocked)<br />48,348 (Docked)
|8,976 (Undocked)<br />16,116 (Docked)
|>2,000 (estimated)
|3,446.8 (Undocked)<br />6,188.6 (Docked)
|1,723.4 (Undocked)<br />3,094.3 (Docked)
|~4,010 (Undocked)<br />~7,200 (Docked)
|~22,000 (Undocked)<br />~40,000 (Docked)
|90 (Undocked)<br />160 (Docked)
|N/A
|1.4
|-
|}
* <sup>1</sup> [[Pixel shader]]s: [[vertex shader]]s: [[texture mapping unit]]s: [[render output unit]]s
* <sup>2</sup> [[Unified shader model|Unified shaders]]: Texture mapping units : Render output units
* <sup>3</sup> [[Ampere (microarchitecture)|Unified shaders (SM count)]]: Texture mapping units : Render output units : [[Nvidia RTX|Ray tracing cores]] : [[Tensor Core]]
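The fillrate columns follow from the core configuration and core clock. As a worked example, using the docked Nintendo Switch 2 entry above:
<math display="block">\text{Texture fillrate} = 48\ \text{TMUs} \times 1007.25\ \text{MHz} \approx 48{,}348\ \text{MTexels/s}</math>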
==See also==
* [[nouveau (software)]]
* [[Scalable Link Interface]] (SLI)
* [[TurboCache]]
* [[Tegra]]
* [[
* [[CUDA]]
*[[
*[[Nvidia NVENC]]
*[[Adreno|Qualcomm Adreno]]
*[[
* [[Comparison of Nvidia nForce chipsets]]
* [[List of AMD graphics processing units]]
* [[List of Intel graphics processing units]]
* [[List of eponyms of Nvidia GPU microarchitectures]]
* [[Imageon]] by [[ATI Technologies|ATI]] (Now [[Advanced Micro Devices|AMD]])
==References==
{{Reflist|colwidth=30em}}
==External links==
* [http://download.nvidia.com/developer/Papers/2005/OpenGL_2.0/NVIDIA_OpenGL_2.0_Support.pdf OpenGL 2.0 support on Nvidia GPUs (PDF document)]
* [http://developer.download.nvidia.com/opengl/glsl/glsl_release_notes.pdf Release Notes for Nvidia OpenGL Shading Language Support (PDF document)]
{{Nvidia}}
{{DEFAULTSORT:Nvidia graphics processing units}}
[[Category:Computing comparisons]]
[[Category:Nvidia graphics processors| ]]
[[Category:Graphics cards|*Nvidia]]
[[Category:Lists of microprocessors]]