Performance per what?

Written by Tim Smalley

May 20, 2007 | 18:22

Tags: #consumption #efficiency #energy #increasing #per #performance #power #requirements #watt

With global warming becoming an increasingly prominent political issue around the world, companies in the technology industry have started to lean heavily on the term “performance per watt” in the hope of being seen as more energy efficient than their competitors.

Performance per watt is a useful metric in many ways, but it’s also a massively flawed one, and I believe those flaws are genuinely bad for this industry.
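
To illustrate the flaw with some purely hypothetical numbers (a rough sketch, not real benchmark data): a new graphics card can double its frame rate, draw considerably more power in absolute terms, and still post a healthy performance per watt improvement.

```python
# Hypothetical, made-up numbers purely to illustrate the flaw in the metric.
old_fps, old_watts = 50.0, 120.0   # imaginary previous-generation card
new_fps, new_watts = 100.0, 190.0  # imaginary new card

old_ppw = old_fps / old_watts      # ~0.42 fps per watt
new_ppw = new_fps / new_watts      # ~0.53 fps per watt

print(f"Performance per watt up {100 * (new_ppw / old_ppw - 1):.0f}%")    # ~26%
print(f"Absolute power draw up  {100 * (new_watts / old_watts - 1):.0f}%") # ~58%
```

In this made-up case the marketing line (“performance per watt is up 26 percent”) is technically true, even though the card pulls 70W more from the wall.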

Let’s start with the CPU manufacturers, because they have been responsible for a large percentage of any system’s power draw since the day the megahertz race started. Of the two, Intel has been the bigger culprit for poor energy efficiency in the past: its 130nm to 90nm die shrink actually resulted in a 25 percent increase in thermal design power (TDP).

"Both ATI and Nvidia seem to keep gobbling up any of the power savings made by the CPU guys."

Prescott, or Preshott as some dubbed it, had awful power leakage that couldn’t be completely eliminated even by Cedar Mill – Intel’s shrink to 65nm. Cedar Mill was essentially the same chip as Prescott with many of the leakage problems fixed, but it still had a TDP of 82W, the same power draw as the 130nm Northwood. In essence, Intel had made two die shrinks without reducing power consumption at all.

There was another development that brought even more dramatic power increases, though: the multi-core era, which took chips’ TDPs up to 130W and beyond.

There’s no doubting that multi-core processors are brilliant things (I now have one in every machine I use), but there was a problem with dual-core desktop processors before Intel's Core 2.

With both the Athlon 64 X2 and Intel's Pentium D series processors, the whole chip ran at its maximum speed whenever it was under load, regardless of whether one core or all of the cores in the CPU were actually busy. That's inefficient, no matter which way you look at it.

Independent per-core power management is currently only available on Intel's Core 2 processors, but it will be a big part of AMD's native quad-core processor architecture too. It's an area where more power can be saved, especially in this early part of the multi-core era, when many applications still don't support multi-threading.

"There's no doubting that unified shader architectures make more efficient GPUs, but there needs to be something put in place to stop power requirements getting out of hand."

While the CPU manufacturers are slowly moving in the right direction, system power requirements keep going up because of the march towards realism in the GPU market – both ATI and Nvidia seem to keep gobbling up any of the power savings made by the CPU guys.

Don't get me wrong here, I want to see graphics technology keep moving forwards, and there's no doubting that unified shader architectures make more efficient GPUs. However, there need to be some limits put in place to stop power requirements getting out of hand.

What irks me the most, though, is that both ATI and Nvidia love talking about better performance per watt despite increasing power consumption year on year.

Take the GeForce 8800 GTX and Radeon HD 2900 XT, both of which have taken performance to another level... at the expense of higher power requirements. The GeForce 8800 GTX has a claimed peak power consumption of 185W, while the Radeon HD 2900 XT is said to peak at almost 215W.


The latter was the first GPU to introduce the 8-pin PCI-Express 2.0 power connector. Using it is a requirement if you're going to overclock the Radeon HD 2900 XT, because the card's power consumption is already close to the 225W maximum that can be drawn from a PCI-Express 1.0a slot (75W) and two 6-pin connectors (2x75W).
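
As a rough sketch of that power budget (using only the nominal limits: 75W from the slot, 75W per 6-pin connector and 150W from an 8-pin connector, set against ATI's quoted 215W peak):

```python
# Nominal PCI-Express power delivery limits, in watts.
SLOT_PCIE_1_0A = 75    # power available through the x16 slot itself
SIX_PIN = 75           # per 6-pin PCI-E power connector
EIGHT_PIN = 150        # per 8-pin PCI-E 2.0 power connector

HD_2900_XT_PEAK = 215  # ATI's quoted peak power draw

budget_two_six_pin = SLOT_PCIE_1_0A + 2 * SIX_PIN             # 225W
budget_six_plus_eight = SLOT_PCIE_1_0A + SIX_PIN + EIGHT_PIN  # 300W

print(f"Two 6-pin connectors: {budget_two_six_pin}W budget, "
      f"{budget_two_six_pin - HD_2900_XT_PEAK}W of headroom")    # 10W
print(f"6-pin plus 8-pin:     {budget_six_plus_eight}W budget, "
      f"{budget_six_plus_eight - HD_2900_XT_PEAK}W of headroom") # 85W
```

With only around 10W of headroom on the slot and two 6-pin connectors, there's simply nothing left for overclocking without the extra 8-pin plug.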

In our R600 evaluation, we decided to measure just how bad these power consumption numbers were. In the same motherboard, we found the Radeon HD 2900 XT was slightly more power hungry than a GeForce 8800 Ultra under load.

Even worse was the fact that the GeForce 8800 GTX consumed 33W less power than the HD 2900 XT, a difference roughly in line with the gap between the peak power consumption figures supplied by ATI and Nvidia. The problem for the HD 2900 XT, though, is that the 8800 GTX delivered a better gaming experience in just about every game we tested. The only exception to this rule was Quake 4, where the 2900 XT was marginally faster at 2560x1600 with 4xAA and 8xAF enabled.

"The performance of the ATI Radeon HD 2900 XT is more in line with the BFGTech 8800 GTS OC 640MB, which uses 62W less power at load."

For the most part though, the performance of the ATI Radeon HD 2900 XT is more in line with the factory-overclocked BFGTech 8800 GTS OC 640MB, which uses 62W less power at load. Compared to the previous generation of hardware, the Radeon HD 2900 XT uses 67W more power than a Radeon X1950 XTX and almost 100W more than a GeForce 7900 GTX.

Obviously, performance is higher on both the GeForce 8800 GTS and the Radeon HD 2900 XT than it was on the previous generation, but these power consumption numbers are still pretty frightening.

It’s not all bad though, because both ATI and Nvidia have worked on idle power consumption, with improved power management techniques that help to keep these GPUs under reasonable control when they’re not doing anything. At idle, ATI has done a better job than Nvidia, with the HD 2900 XT using less power than a GeForce 8800 GTS 640MB, but both graphics cards still use more power than the previous generation.

When is this continual increase in power requirements going to level off, or maybe even stop?

Only the hardware manufacturers can answer that question, but I hope it’s sooner rather than later. These two-fold increases in graphics performance are great for gaming (and for the cooling and power supply manufacturers too), but they’re not good for our environment – no matter how much better the performance per watt is compared to the previous generation.

At the very least, let's improve idle power consumption far more than we're seeing at the moment. I can't see either G80 or R600 (in their current form) being used in notebooks because they're too power hungry even in their idle states – there's a reason why neither company has announced high-end DirectX 10 graphics solutions for notebooks.

Come on guys, let's start talking about two-fold improvements in performance per watt at the same or lower power consumption than the previous generation. Let's stop the increases and start thinking about the planet a bit more, because I'm fed up with asking "performance per what?" every time I'm told performance per watt has increased massively.