As with the Radeon 6900/7900 series and the GTX 500 series, the 600 series handles power management differently from older cards, and the GTX 680 takes this further than any of them, so we have changed the way we test power and temperature levels in our reviews. For idle power we list full system draw at the wall after sitting at the desktop with no activity for five minutes. Load power is the highest full-system reading we saw during testing for this review. Temperatures are taken in the same way. Noise levels are measured after a period of prolonged gaming in a scenario which applied maximum load to the GPU.
Cards compared: NVIDIA GeForce GTX 680, MSI R7970 Lightning MaxOC, Reference Radeon HD 7970, GeForce GTX 580 OC.
Looking at the results above, NVIDIA's reference design produces some impressive idle figures. Power use matches the 7970 and improves on the GTX 580, and temperatures are comparable with a custom 7970 using a dual-fan cooler. At load the results only get better: the GTX 680 requires nearly 100W less power than the GTX 580 and 50W less than a reference 7970, while running at a similar temperature.
Obviously this change in power use over the GTX 580 is quite significant, so how do NVIDIA achieve it? The first change is the new SMX units within the core. Instead of running the shaders at twice the core clock (maximising performance per unit of die space at the cost of power draw), as was the case with Fermi cards, Kepler and the GTX 680 use more CUDA cores at a lower speed. Each SMX within the GTX 680 contains 192 CUDA cores, six times the number per SM in the GTX 580, along with 16 texture units and a PolyMorph Engine.
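To see why fewer megahertz but more cores still comes out ahead, here is a rough back-of-the-envelope comparison using the public spec-sheet figures (GTX 580: 512 cores with shaders at 1544MHz, twice the 772MHz core clock; GTX 680: 1536 cores at a single 1006MHz clock). Raw cores-times-clock is only a crude proxy for real performance, so treat the ratio as illustrative:

```python
# Rough shader-throughput comparison from public spec figures.
# Real game performance depends on far more than cores x clock.
gtx580_cores, gtx580_shader_mhz = 512, 1544   # Fermi: shaders run at 2x the 772MHz core clock
gtx680_cores, gtx680_shader_mhz = 1536, 1006  # Kepler: shaders run at the single core clock

gtx580_rate = gtx580_cores * gtx580_shader_mhz  # arbitrary "core-MHz" units
gtx680_rate = gtx680_cores * gtx680_shader_mhz

print(f"GTX 580: {gtx580_rate} core-MHz")
print(f"GTX 680: {gtx680_rate} core-MHz")
print(f"ratio:   {gtx680_rate / gtx580_rate:.2f}x")
```

The lower clock is where the power saving comes from: dynamic power rises faster than linearly with frequency (voltage usually has to rise with it), so nearly doubling theoretical throughput via extra cores rather than clock speed costs less power than the Fermi approach.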
Paired with this specification is hardware monitoring which tracks conditions within the card, such as GPU load and power use, and balances the clock speed to suit the situation. Every GTX 680 at launch is guaranteed to run at 1006MHz under load. When an application doesn't use the full power available to the GPU, the GTX 680 automatically increases the core speed up to a limit that remains within the power requirements of the card; the average increase is known as the boost clock and is rated at 1058MHz for launch cards. Here is an example of this technology at work:
Shown above are two images of EVGA's PrecisionX tool, which allows us to monitor and control the card. In the first image we can tweak the power available to the card as well as the core/memory speed offsets used to overclock; more on that shortly. In the second image the graphs show power draw and clock speeds as we opened and ran 3DMark 11. The three key traces are the top items: the first line shows the power use of the card, the second the clock speed the GTX 680 set itself to, and the third our memory speed, which varies less.
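The clock behaviour in those graphs can be thought of as a simple feedback loop. The sketch below is our own illustration, not NVIDIA's actual algorithm, and the step size and power target are assumptions for the example; the one number taken from the card's specification is the guaranteed 1006MHz floor:

```python
# Hypothetical sketch of GPU Boost-style clock balancing (illustrative
# only, not NVIDIA's real implementation): raise the clock while power
# headroom remains, back off when over budget, never drop below base.
BASE_MHZ = 1006       # guaranteed clock under load (from the spec)
STEP_MHZ = 13         # assumed adjustment step for this sketch
POWER_TARGET_W = 195  # assumed board power target for this sketch

def next_clock(current_mhz, measured_power_w):
    """Return the clock for the next sample given the last power reading."""
    if measured_power_w < POWER_TARGET_W:
        return current_mhz + STEP_MHZ                  # headroom: boost
    return max(BASE_MHZ, current_mhz - STEP_MHZ)       # over budget: back off
```

For example, a light workload drawing 150W at 1006MHz would be stepped up (`next_clock(1006, 150)` returns 1019), while a heavy one drawing 210W at the base clock stays pinned at 1006MHz rather than dropping below it.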
Taking the technology a step further, we have the aforementioned power/speed controls. By increasing the power available to the card and applying GPU/memory offsets we can raise the maximum speed of the card. The first graph shows the power/speed lines for a Battlefield 3 session; the second is a continuation after we applied a 132% power target and approx. 100MHz offsets on GPU and memory. Following the change in settings the GPU speed registers as 1197MHz (rather than 1097MHz in graph 1) and the memory as 3110MHz compared to 3005MHz. Of course, how much the speeds fluctuate will depend on the game played and how much power it requires.
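The offsets at work can be recovered from the readings in the two graphs. A quick check using the article's own before/after figures:

```python
# Recover the applied offsets from the observed clocks in the two graphs.
before = {"gpu_mhz": 1097, "mem_mhz": 3005}  # Battlefield 3, stock settings
after  = {"gpu_mhz": 1197, "mem_mhz": 3110}  # after 132% power + offsets

offsets = {key: after[key] - before[key] for key in before}
print(offsets)  # the GPU gained 100MHz and the memory 105MHz
```

This matches the "approx. 100MHz" description: the GPU offset lands at exactly +100MHz, with the memory a touch higher at +105MHz.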
The key point to take away is that the card automatically gives us the best balance of power draw and speed, never falling below 1006MHz unless the protection circuitry kicks in (e.g. the fan stops).
Before we look at our maximum overclock on the reference GTX 680, here is an example of the tone produced by the fan at full load, recorded at a distance of around 1ft in an open environment.