GeForce 600 Series
Release date: March 22, 2012
Codenames: Fermi, Kepler
Models: GeForce Series
  • GeForce GT Series
  • GeForce GTX Series
Transistors:
  • 292M, 40 nm (GF119)
  • 585M, 40 nm (GF108)
  • 1,170M, 40 nm (GF116)
  • 1,950M, 40 nm (GF114)
  • 1,300M, 28 nm (GK107)
  • 2,540M, 28 nm (GK106)
  • 3,540M, 28 nm (GK104)
Cards
  • Entry-level: GT 605, GT 610, GT 620, GT 630, GT 640
  • Mid-range: GT 645, GTX 650, GTX 650 Ti, GTX 660, GTX 660 Ti
  • High-end: GTX 670, GTX 680, GTX 690
API support
  • DirectX: Direct3D 11, Shader Model 5.0
  • OpenCL: OpenCL 1.2
  • OpenGL: OpenGL 4.3
History
  • Predecessor: GeForce 500 Series
  • Successor: GeForce 700 Series

The GeForce 600 Series is a family of graphics processing units developed by Nvidia, used in desktop and laptop PCs. It introduced the Kepler architecture (GK-codenamed chips), named after the German mathematician, astronomer, and astrologer Johannes Kepler.

Notable Improvements


Where the goal of the previous architecture, Fermi, was to increase raw performance (particularly for tessellation), Nvidia's goal with the Kepler architecture was to increase performance per watt, while still striving for overall performance increases.[1] The primary way it achieved this goal was through the use of a unified clock. Abandoning the shader clock found in previous GPU designs increases efficiency, even though it requires more cores to achieve similar levels of performance. This is not only because the cores themselves are more power-efficient (two Kepler cores use about 90% of the power of one Fermi core, according to Nvidia's numbers), but also because the reduction in clock speed delivers a 50% reduction in power consumption in that area.[2]

Kepler also introduced a new form of texture handling known as bindless textures. Previously, the CPU had to bind each texture to a slot in a fixed-size table before the GPU could reference it. This imposed two limitations: because the table was fixed in size, only as many textures could be in use at one time as would fit in the table (128); and the CPU did unnecessary work, both loading each texture and binding each loaded texture to a slot in the table.[1] Bindless textures remove both limitations: the GPU can access any texture loaded into memory, increasing the number of available textures and eliminating the performance penalty of binding.
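The contrast can be sketched as a conceptual model in Python (illustrative only; `bind_textures` and `bindless_access` are invented names for the two schemes, not a real graphics API):

```python
BINDING_TABLE_SIZE = 128  # fixed table size mentioned above

def bind_textures(textures):
    """Slot-based model: the CPU must place every active texture in a
    fixed-size binding table, capping how many are usable at once."""
    if len(textures) > BINDING_TABLE_SIZE:
        raise ValueError("binding table full: at most 128 textures at once")
    return {slot: tex for slot, tex in enumerate(textures)}

def bindless_access(handles, handle):
    """Bindless model: the GPU reaches any loaded texture directly by
    handle; there is no table and no CPU-side binding step."""
    return handles[handle]

table = bind_textures([f"tex{i}" for i in range(128)])  # exactly fits the table
handles = {i: f"tex{i}" for i in range(500)}            # no 128-texture cap
assert bindless_access(handles, 499) == "tex499"
```

In the slot-based model a 129th texture raises an error; in the bindless model the only limit is what fits in memory.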

Finally, with Kepler, Nvidia was able to increase the memory clock to 6 GHz. To accomplish this, they needed to design an entirely new memory controller and bus. While still shy of the theoretical 7 GHz limitation of GDDR5, this is well above the 4 GHz speed of the memory controller for Fermi.[2]
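These clock figures translate directly into peak bandwidth: GDDR5's quoted speed is its effective data rate, so peak bandwidth is that rate times the bus width in bytes. A minimal sketch, using the GTX 680's 6 GHz / 256-bit configuration from the table below:

```python
def peak_bandwidth_gb_s(effective_mhz, bus_width_bits):
    """Peak memory bandwidth in GB/s: effective data rate (MT/s)
    multiplied by the bus width in bytes."""
    return effective_mhz * 1e6 * bus_width_bits / 8 / 1e9

# GTX 680: 6 GHz effective GDDR5 on a 256-bit bus
print(peak_bandwidth_gb_s(6000, 256))  # 192.0
```

(The spec sheet's 192.3 GB/s reflects the slightly higher exact data rate of 6008 MT/s.)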

Features


The GeForce 600 Series contains products from both the older Fermi and newer Kepler generations of Nvidia GPUs. Kepler based members of the 600 series add the following standard features to the GeForce family:

  • Microsoft Direct3D 11, with partial support of Direct3D 11.1
  • PCI Express 3.0 interface
  • DisplayPort 1.2
  • HDMI 1.4a 4K x 2K video output
  • PureVideo VP5 hardware video acceleration (up to 4K x 2K H.264 decode)
  • Hardware H.264 encoding acceleration block (NVENC)
  • Support for up to 4 independent 2D displays, or 3 stereoscopic/3D displays (NV Surround)
  • Bindless Textures
  • CUDA Compute Capability 3.0
  • Manufactured by TSMC on a 28 nm process
  • GPU Boost

GPU Boost is a new feature which is roughly analogous to turbo boosting of a CPU. The GPU is always guaranteed to run at a minimum clock speed, referred to as the "base clock". This clock speed is set to the level which will ensure that the GPU stays within TDP specifications, even at maximum loads.[1] When loads are lower, however, there is room for the clock speed to be increased without exceeding the TDP. In these scenarios, GPU Boost will gradually increase the clock speed in steps, until the GPU reaches a predefined power target (170 W by default).[2] By taking this approach, the GPU ramps its clock up or down dynamically, providing the maximum speed possible while remaining within TDP specifications.
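The stepping behaviour can be sketched as a toy loop (illustrative only: the fixed step size and the linear power model below are invented for the example, not Nvidia's actual boost bins or telemetry):

```python
def boost_clock(base_mhz, step_mhz, power_at, target_w):
    """Raise the clock in fixed steps for as long as the estimated power
    draw of the next step stays within the power target."""
    clock = base_mhz
    while power_at(clock + step_mhz) <= target_w:
        clock += step_mhz
    return clock

# Hypothetical linear model: 130 W at the 1006 MHz base, +5 W per 13 MHz step
power = lambda mhz: 130 + (mhz - 1006) / 13 * 5
print(boost_clock(1006, 13, power, 170))  # 1110
```

With these invented numbers the loop happens to land on 1110 MHz, the GTX 680's listed maximum boost; the real hardware adjusts continuously against measured board power rather than a fixed formula.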

The power target, as well as the size of the clock increase steps that the GPU will take, are both adjustable via third-party utilities and provide a means of overclocking Kepler-based cards.[1]

Microsoft Direct3D Support


Nvidia Kepler GPUs fully support Direct3D 11 and partially support Direct3D 11.1.[3] The following features of Direct3D 11.1 are not supported by Kepler:

  • Target-Independent Rasterization (2D rendering only).
  • 16xMSAA Rasterization (2D rendering only).
  • Orthogonal Line Rendering Mode.
  • UAV (Unordered Access View) in non-pixel-shader stages.

New Driver Features


In the R300 drivers, released alongside the GTX 680, Nvidia introduced a new feature called Adaptive VSync. It addresses a limitation of traditional v-sync: when the framerate drops below 60 FPS, the v-sync rate falls to 30 FPS, and then to further factors of 60 if needed, producing visible stuttering. Yet when the framerate is below 60 FPS, v-sync is unnecessary, since the monitor can display frames as soon as they are ready. To address this issue (while still maintaining the advantage of v-sync with respect to screen tearing), Adaptive VSync can be turned on in the driver control panel: it enables v-sync when the framerate is at or above 60 FPS and disables it when the framerate drops lower. Nvidia claims that this results in a smoother overall display.[1]
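The policy reduces to a simple rule, sketched below (illustrative; the 60 Hz refresh rate and the divisor behaviour of classic v-sync follow the description above):

```python
def classic_vsync_rate(fps, refresh_hz=60):
    """Classic v-sync: the displayed rate snaps down to refresh_hz / n,
    the source of the stuttering described above."""
    n = 1
    while refresh_hz / n > fps:
        n += 1
    return refresh_hz / n

def adaptive_vsync_enabled(fps, refresh_hz=60):
    """Adaptive VSync: synchronize only when the GPU can keep up."""
    return fps >= refresh_hz

print(classic_vsync_rate(45))        # 30.0 -- the jarring 60-to-30 drop
print(adaptive_vsync_enabled(45))    # False -- v-sync off, frames shown as ready
print(adaptive_vsync_enabled(75))    # True  -- v-sync on, tearing prevented
```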

Although it debuted alongside the GTX 680, the feature is also available to users of older Nvidia cards who install the updated drivers.[1]

History


In September 2010, Nvidia first announced the new architecture.[4]

In early 2012, details of the first 600 series parts emerged. These initial members were entry-level laptop GPUs sourced from the older Fermi architecture.

On March 22, 2012, the first Kepler products joined the 600 series: the GTX 680 for desktop PCs and the GeForce GT 640M, GT 650M, and GTX 660M for notebook/laptop PCs. The GK104 (which powers the GTX 680) has 1536 CUDA cores, in eight groups of 192, and 3.5 billion transistors. The GK107 (GT 640M/GT 650M/GTX 660M) has 384 CUDA cores.
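The headline compute figures follow directly from the core count and clock: each CUDA core can retire one fused multiply-add (two FLOPs) per cycle, so peak single-precision throughput is cores × 2 × clock. A quick check against the GTX 680's numbers:

```python
def sp_gflops(cuda_cores, core_mhz):
    """Peak single-precision GFLOPS: one FMA (2 FLOPs) per core per cycle."""
    return cuda_cores * 2 * core_mhz / 1000

print(round(sp_gflops(1536, 1006), 1))  # 3090.4, matching the spec table below
```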

On April 29, 2012, the first dual-GPU Kepler product joined the 600 series. The GTX 690 carries two of the GTX 680's GPUs, for a total of 3072 CUDA cores and an effective 512-bit (2× 256-bit) memory interface.

On May 10, 2012, the GTX 670 joined the series. The card features 1344 CUDA cores, 2 GB of GDDR5 VRAM, and a 256-bit memory bus.

On June 4, 2012, the GTX 680M joined the series. This mobile GPU, based on the GTX 670, features 1344 CUDA cores, 4 GB of GDDR5 VRAM, and a 256-bit memory bus.

On August 16, 2012, the GTX 660 Ti joined the series. The card has 1344 CUDA cores along with 2 GB of GDDR5 VRAM and a 192-bit memory bus.

On September 13, 2012, the GTX 660 and GTX 650 joined the series. The GTX 660 has 960 CUDA cores, 2 GB of GDDR5 VRAM, and a 192-bit memory bus; the GTX 650 has 384 CUDA cores, 1 GB of GDDR5 VRAM, and a 128-bit memory bus.

On October 9, 2012, the GTX 650 Ti joined the series. The card features 768 CUDA cores along with 1 GB of GDDR5 VRAM and a 128-bit memory bus.[5]

Products

  • 1 Core config: SPs (shader processors, i.e. unified shaders handling vertex/geometry/pixel work) : TMUs (texture mapping units) : ROPs (render output units)
  • 2 The GeForce 605 (OEM) card is a rebranded GeForce 510.
  • 3 The GeForce GT 610 card is a rebranded GeForce GT 520.
  • 4 The GeForce GT 620 (OEM) card is a rebranded GeForce GT 520.
  • 5 The GeForce GT 620 card is a rebranded GeForce GT 530.
  • 6 The GeForce GT 630 (DDR3) card is a rebranded GeForce GT 440 (DDR3).
  • 7 The GeForce GT 630 (GDDR5) card is a rebranded GeForce GT 540 (GDDR5).
  • 8 The GeForce GT 640 (OEM) card is a rebranded GeForce GT 545 (DDR3).
  • 9 The GeForce GT 645 (OEM) card is a rebranded GeForce GTX 560 SE.
Model Launch Code name Fab (nm) Transistors (Million) Die Size (mm2) Die Count Bus interface Memory (MiB) SM count Config core 1 Clock rate Fillrate Memory Configuration API support (version) GFLOPS (FMA) TDP (watts) GFLOPS/W Release Price (USD)
Core (MHz) Average Boost (MHz) Max Boost (MHz) Shader (MHz) Memory (MHz) Pixel (GP/s) Texture (GT/s) Bandwidth (GB/s) DRAM type Bus width (bit) DirectX OpenGL OpenCL
GeForce 605 2 April 3, 2012 GF119 40 Unknown 79 1 PCIe 2.0 x16 512 / 1024 1 48:8:4 523 1046 1796 2.1 4.3 14.4 DDR3 64 11 4.3 1.1 100.4 25 4.02 OEM
GeForce GT 610 3 May 15, 2012 GF119 40 Unknown 79 1 PCIe 2.0 x16, PCI 1024 1 48:8:4 810 1620 1800 3.24 6.5 14.4 DDR3 64 11 4.3 1.1 155.5 29 5.36 Retail
GeForce GT 620 4 April 3, 2012 GF119 40 Unknown 79 1 PCIe 2.0 x16, PCI 512 / 1024 1 48:8:4 810 1620 1798 3.24 6.5 14.4 DDR3 64 11 4.3 1.1 155.5 30 5.18 OEM
GeForce GT 620 5 May 15, 2012 GF108 40 585 116 1 PCIe 2.0 x16, PCI 1024 2 96:16:4 700 1400 1800 Unknown 11.2 14.4 DDR3 64 11 4.3 1.1 268.8 49 5.49 Retail
GeForce GT 630 April 24, 2012 GK107 28 1300 118 1 PCIe 3.0 x16 1024 / 2048 1 192:16:16 875 Unknown Unknown 875 1782 7 14 28.5 DDR3 128 11 4.3 1.2 336 50 6.72 OEM
GeForce GT 630 (DDR3) 6 May 15, 2012 GF108 40 585 116 1 PCIe 2.0 x16, PCI 1024 2 96:16:4 810 1620 1800 Unknown 13 28.8 DDR3 128 11 4.3 1.1 311 65 4.79 Retail
GeForce GT 630 (GDDR5) 7 May 15, 2012 GF108 40 585 116 1 PCIe 2.0 x16, PCI 1024 2 96:16:4 810 1620 3200 Unknown 13 51.2 GDDR5 128 11 4.3 1.1 311 65 4.79 Retail
GeForce GT 640 8 April 24, 2012 GF116 40 1170 238 1 PCIe 2.0 x16 1536 / 3072 3 144:24:24 720 1440 1782 Unknown 17.3 42.8 DDR3 192 11 4.3 1.1 414.7 75 5.53 OEM
GeForce GT 640 (DDR3) April 24, 2012 GK107 28 1300 118 1 PCIe 3.0 x16 1024 / 2048 2 384:32:16 797 Unknown Unknown 797 1782 12.8 25.5 28.5 DDR3 128 11 4.3 1.2 612.1 50 12.24 OEM
GeForce GT 640 (GDDR5) April 24, 2012 GK107 28 1300 118 1 PCIe 3.0 x16 1024 / 2048 2 384:32:16 950 Unknown Unknown 950 5000 15.2 30.4 80 GDDR5 128 11 4.3 1.2 729.6 75 9.73 OEM
GeForce GT 640 (DDR3) June 5, 2012 GK107 28 1300 118 1 PCIe 3.0 x16 2048 2 384:32:16 900 Unknown Unknown 900 1800 14.4 28.8 28.5 DDR3 128 11 4.3 1.2 691.2 65 10.63 $99
GeForce GT 645 9 April 24, 2012 GF114 40 1950 332 1 PCIe 2.0 x16 1024 6 288:48:24 776 1552 3828 Unknown 37.3 91.9 GDDR5 192 11 4.3 1.1 894 140 6.39 OEM
GeForce GTX 650 September 13, 2012 GK107 28 1300 118 1 PCIe 3.0 x16 1024 2 384:32:16 1058 1058 5000 16.9 33.9 80 GDDR5 128 11 4.3 1.2 812.5 64 12.7 $109
GeForce GTX 650 Ti October 9, 2012 GK106 28 2540 221 1 PCIe 3.0 x16 1024 / 2048 4 768:64:16 928 925 5400 14.8 59.2 86.4 GDDR5 128 11 4.3 1.2 1425.4 110 12.96 $149
GeForce GTX 660 (OEM) August 22, 2012 GK104 28 3540 294 1 PCIe 3.0 x16 1536 / 3072 6 1152:96:24 823 888 Unknown 823 5800 19.8 79 134 GDDR5 192 11 4.3 1.2 1896.2 130 14.59 OEM
GeForce GTX 660 September 13, 2012 GK106 28 2540 221 1 PCIe 3.0 x16 2048 5 960:80:24 980 1033 1033 980 6000 22[6] 78.5 144.2 GDDR5 192 11 4.3 1.2 1881.6 140 13.44 $229
GeForce GTX 660 Ti August 16, 2012 GK104 28 3540 294 1 PCIe 3.0 x16 2048 7 1344:112:24 915 980 915 6000 22[6] 102.5 144.2 GDDR5 192 11 4.3 1.2 2459.5 150 16.4 $299
GeForce GTX 670 May 10, 2012 GK104 28 3540 294 1 PCIe 3.0 x16 2048 / 4096[7] 7 1344:112:32 915 980 1084 915 6000 29.3 102.5 192.3 GDDR5 256 11 4.3 1.2 2459.5 170 14.47 $399
GeForce GTX 680 March 22, 2012 GK104 28 3540 294 1 PCIe 3.0 x16 2048 / 4096 8 1536:128:32 1006[1] 1058 1110 1006 6000 32.2 128.8 192.3 GDDR5 256 11 4.3 1.2 3090.4 195 15.85 $499
GeForce GTX 690 April 29, 2012 2× GK104 28 2× 3540 2× 294 2 PCIe 3.0 x16 2× 2048 2× 8 2× 1536:128:32 915 1019 1058[8] 915 6000 2× 29.3 2× 117.1 2× 192.3 GDDR5 2× 256 11 4.3 1.2 2× 2810.9 300 18.74 $999


References

  1. ^ a b c d e f g "NVIDIA GeForce GTX 680 Whitepaper" (PDF, 1405 KB), page 6 of 29.
  2. ^ a b c Smith, Ryan (22 March 2012). "NVIDIA GeForce GTX 680 Review: Retaking The Performance Crown". AnandTech. Retrieved 25 November 2012.
  3. ^ Edward, James (22 November 2012). "NVIDIA claims partially support DirectX 11.1". TechNews.
  4. ^ Yam, Marcus (22 September 2010). "Nvidia roadmap". Tom's Hardware US.
  5. ^ "NVIDIA GeForce GTX 650 Ti". GeForce.com. http://www.geforce.com/whats-new/articles/nvidia-geforce-gtx-650-ti/
  6. ^ a b "Test: NVIDIA GeForce GTX 660". Hardwareluxx. http://www.hardwareluxx.com/index.php/reviews/hardware/vgacards/21608-test-nvidia-geforce-gtx-660.html
  7. ^ EVGA product page, part no. 04G-P4-2673-KR. http://www.evga.com/Products/Product.aspx?pn=04G-P4-2673-KR
  8. ^ "NVIDIA GeForce GTX 690 Review". AnandTech. http://www.anandtech.com/show/5805/nvidia-geforce-gtx-690-review-ultra-expensive-ultra-rare-ultra-fast/17

Category:Nvidia Category:Graphics chips Category:2012 introductions