bit-tech.net

GPUs 'only' 14 times faster than CPUs

An Intel study into GPGPU computing came up with an interesting result: it beats CPUs. Whoops.

It has long been held that graphics cards are inherently good at parallel processing tasks, but support for the theory has now come from an unlikely source - CPU manufacturer Intel.

As reported over on ITworld, the chip giant set out to disprove the myth that GPUs offer a 100x speed boost over CPUs in parallel processing tasks - in other words, attempting to keep selling its top-end CPUs rather than watch high-performance computing customers move to GPU-based supercomputers such as the FASTRA II.

While Intel successfully debunked the myth that moving your parallel processing tasks onto the GPU via CUDA or OpenCL would net you a 100x performance boost, it failed to show that there was no performance boost. Rather, the final figures demonstrated that the Nvidia GeForce GTX 280 used in the test out-performed the Core i7 960 3.2GHz processor by a margin of 2.5x on average - with certain functions running up to fourteen times faster on the GPU than the CPU.
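For the curious, this is roughly the shape of the code behind those numbers - a minimal, hypothetical sketch (not one of the kernels from Intel's paper) of the same element-wise operation written for a single CPU core and as a CUDA kernel:

// Hypothetical sketch, not a kernel from the Intel study: SAXPY, the kind
// of data-parallel loop that maps well onto a GPU via CUDA.

// CPU: a single core walks the array one element at a time.
void saxpy_cpu(int n, float a, const float* x, float* y) {
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}

// GPU: thousands of lightweight threads each handle one element.
__global__ void saxpy_gpu(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)
        y[i] = a * x[i] + y[i];
}

// Launched with enough threads to cover the whole array, e.g.:
//   saxpy_gpu<<<(n + 255) / 256, 256>>>(n, 2.0f, d_x, d_y);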

That's an embarrassing result for Intel - and doubly so when you realise that the graphics card used was released almost exactly two years ago in June 2008, whereas the CPU is from October 2009.

Nvidia, naturally, is crowing about the test results, with the company's general manager of GPU computing, Andy Keane, blogging: "It's a rare day in the world of technology when a company you compete with stands up at an important conference and declares that your technology is only up to 14 times faster than theirs."

Despite Intel's testing results, the day when we can ditch the CPU is still some way off: while GPUs show great improvements in massively parallel tasks, a lot of day-to-day computing is serial in nature - and thus runs faster on a CPU.
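By way of contrast, here's a hypothetical example of the serial case: each step needs the result of the one before it, so there's nothing for thousands of GPU threads to chew on in parallel.

// Hypothetical illustration of an inherently serial task: each iteration
// depends on the previous one, so extra cores or GPU threads can't help.
float iterate(float x0, int steps) {
    float x = x0;
    for (int i = 0; i < steps; ++i)
        x = 0.5f * (x + 2.0f / x);  // Newton's iteration for sqrt(2)
    return x;                       // step i can't start until step i-1 is done
}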

Are you shocked to see that an elderly Nvidia graphics card can beat one of Intel's more recent processors, or did you always know that GPGPU computing was the future? Share your thoughts over in the forums.

39 Comments

rickysio 25th June 2010, 10:37 Quote
What happens if you use a GTX480?

Utter annihilation?
Lizard 25th June 2010, 10:51 Quote
Quote:
Originally Posted by rickysio
What happens if you use a GTX480?

The atmosphere catches on fire and the world ends :)
EvilMerc 25th June 2010, 11:05 Quote
Not particularly surprising tbh. Didn't expect the 14x boost over a CPU but it was always going to be a significant margin.
Lizard 25th June 2010, 11:11 Quote
More seriously, there doesn't appear to be any explanation of what specific tasks, applications, drivers and libraries were used to get these results - all of which will massively affect the result.

e.g. pre-mid-2009 a GTX 285 would easily outperform a Core i7 in Folding@home, but after that (with the release of new clients) the CPU would be faster.

As always, it depends on what you're doing with your hardware.
mjb501 25th June 2010, 11:30 Quote
The results seem to tie in with the performance gains I have seen doing GPGPU at uni: yes, the GPU is a lot faster at certain tasks, but there are others, such as branching, where the performance gain is significantly less - plus the fact that existing software would have to be recoded from x86 to CUDA/OpenCL.

I'd like to see a comparison of performance per watt of the CPU and GPU running these tasks.
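To put the branching point above into concrete terms, here's a hedged, hypothetical CUDA sketch (not from the Intel paper): threads in a 32-wide warp run in lockstep, so when neighbouring threads take different branches the hardware executes both paths one after the other.

// Hypothetical sketch of branch divergence on a GPU.
__global__ void divergent(const float* in, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    if (in[i] > 0.0f)             // neighbouring threads may disagree here...
        out[i] = sqrtf(in[i]);    // ...so this path runs first,
    else
        out[i] = -in[i] * in[i];  // ...then this one, roughly doubling the cost
}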
BlackMage23 25th June 2010, 11:33 Quote
Does that mean that Intel just owned themselves?
eddtox 25th June 2010, 11:38 Quote
Does this imply that there are lessons to be learned from GPGPUs for CPU manufacturers? Is it possible to apply knowledge from one field to the other and narrow the gap? AMD would be in a great position to do this.
memeroot 25th June 2010, 11:42 Quote
@Lizard - I thought the new clients were a little bit dishonest regarding cpu/gpu performance
@mjb501 - performance per watt would be interesting, but I think we can guess, given that you'd need a <5* performance improvement to make it worthwhile
@rickysio - you'd have to hope so, given that's what the 480 was designed for... heck, it's faster than a 5870, which was 'just' designed for graphics.
Lizard 25th June 2010, 12:03 Quote
Quote:
Originally Posted by memeroot
@Lizard - I thought the new clients were a little bit dishonest regarding cpu/gpu performance.

In what way? Do you mean how the apparent performance (if you're measuring in ppd) of the clients has varied over time as Stanford adjusts the points system?
Shagbag 25th June 2010, 12:11 Quote
I'm amazed Intel fessed up to it in the first place.
Marketing-wise, they've shot themselves in the foot with both barrels.
I have absolutely no doubt that they loaded each test with as much bias as possible.
The fact that they used a GPU that was 1 year older than their CPU is a case in point.
To then come out and say the GPU was at least 2.5 times as fast as their CPU (at parallel processing tasks) is amazing.

One thing is for sure: no way in Hell would Apple's Marketing Dept allow such a test result to be released.
Phil Rhodes 25th June 2010, 12:18 Quote
Quote:
As always, it depends on what you're doing with your hardware.

Quite so. Actually, I'm surprised it's only 14 times; for graphics I suspect it would really be quite a lot more than that, frame rate for frame rate (though a CPU based graphics engine would likely give more accurate results, if you care).

P
rickysio 25th June 2010, 13:05 Quote
Quote:
Originally Posted by Shagbag
I'm amazed Intel fessed up to it in the first place.
Marketing-wise, they've shot themselves in the foot with both barrels.
I have absolutely no doubt that they loaded each test with as much bias as possible.
The fact that they used a GPU that was 1 year older than their CPU is a case in point.
To then come out and say the GPU was at least 2.5 times as fast as their CPU (at parallel processing tasks) is amazing.

One thing is for sure: no way in Hell would Apple's Marketing Dept allow such a test result to be released.

It's not really marketing - it's a paper for discussion by experts.
memeroot 25th June 2010, 13:12 Quote
@Lizard

Yep, that's what I meant - though I only picked up the info from the forum, so I don't honestly know if it's true.
Centy-face 25th June 2010, 13:39 Quote
Quote:
Originally Posted by Lizard
Quote:
Originally Posted by rickysio
What happens if you use a GTX480?

The atmosphere catches on fire and the world ends :)

Why do I get an image of Charlton Heston screaming on a beach looking up at a huge 480
cgthomas 25th June 2010, 14:03 Quote
Quote:
Originally Posted by BlackMage23
Does that mean that Intel just owned themselves?

Employee: Doc, I've just pwned myself
Prof: so what, you're a nerd anyway
Fizzban 25th June 2010, 15:32 Quote
Not surprised really. Only surprised Intel has let this information out.
delriogw 25th June 2010, 16:52 Quote
credit to them for not trying to hide it to be honest.

They can learn from this, as can the industry in general. It also shows the difference isn't as large as a lot of people thought, which in Intel's eyes is of course a positive (and so it should be).

@Shagbag: the fact that Apple wouldn't release this says more about Apple than it does about Intel.
wuyanxu 25th June 2010, 17:45 Quote
Expected result for a 3.2GHz i7 vs a GTX 280 - in fact, 2.5x is about the speed difference I get when encoding a video on an i7 860 versus a GTX 260.

What Intel should have shown is single-threaded performance, or heavily branch-dependent performance.
bogie170 25th June 2010, 18:01 Quote
I tried putting a Nvidia GeForce GTX 280 in my CPU socket but it didn't work. Can anyone help me?
Faulk_Wulf 25th June 2010, 18:16 Quote
n00b question. But if GPUs are so much faster-- why haven't we adapted them to replace CPUs as the main processor for tasks? Is it like a standard-vs-metric type of thing where its just completely incompatible? Just seems like it'd be a massive upgrade.
thehippoz 25th June 2010, 19:00 Quote
I like how they used the term 'crowing' when talking about nvidia.. that sounds about right :D
Sloth 25th June 2010, 19:11 Quote
Quote:
Originally Posted by Faulk_Wulf
n00b question. But if GPUs are so much faster-- why haven't we adapted them to replace CPUs as the main processor for tasks? Is it like a standard-vs-metric type of thing where its just completely incompatible? Just seems like it'd be a massive upgrade.
My own understanding is that
a) People don't like change. It'd require a massive switch for the entire physical structure of the PC.
b) CPUs are still better at a variety of tasks, tasks which are still quite common today. Most applications currently available and used can't support the highly parallel nature of a GPU.
c) Standards such as CUDA and OpenCL need to be more widely developed/adopted.
d) Intel and AMD like money.
HourBeforeDawn 25th June 2010, 19:57 Quote
Well, this makes sense: in all the apps I saw that took advantage of a GPU, they processed their task about 14-20 times faster than on the CPU, so the numbers seem spot on. I don't really see the CPU ever going away - it will probably end up being the primary chip on mobos, with the CPU, north bridge and south bridge all in one modular chip, which is where we're heading anyway.
yougotkicked 25th June 2010, 20:53 Quote
For a while now I have had a 'vision' of the future of computing. The way I see it, GPUs are a lot more powerful but less versatile, while CPUs, though slower, are extremely adaptable. I honestly think that in 10-15 years we will stop thinking of GPUs as graphics processors and start seeing them as specialised processors responsible for all kinds of digital heavy lifting, while the standard CPU will have many more cores than today's and will be responsible for lower-level computations and for managing the GPUs' workload.
Elton 25th June 2010, 21:01 Quote
Parallelism. Imagine programming something that uses 320 threads.

Hell imagine the time it would take to program something with 64 threads.
Mraedis 25th June 2010, 21:29 Quote
Quote:
Originally Posted by Elton
Parallelism. Imagine programming something that uses 320 threads.

Hell imagine the time it would take to program something with 64 threads.

Of course, you could be smart and have it assign an 'empty' thread automatically, over thread(1) or stuff :D
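For what it's worth, 320 threads is not as scary as it sounds in the CUDA model: you write the per-element work once and ask the runtime for as many threads as you need, rather than managing each one by hand. A hypothetical sketch:

// Hypothetical sketch: 320 GPU threads is one small launch, not 320
// hand-managed threads. Each thread squares one element of the array.
__global__ void square(float* v) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    v[i] = v[i] * v[i];
}

// 5 blocks of 64 threads = 320 threads, one per element:
//   square<<<5, 64>>>(d_v);  // d_v must hold at least 320 floats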
leexgx 25th June 2010, 21:40 Quote
A GTX 480 does about 15-16k PPD when it's working correctly (it sometimes drops to 10k when VMware folding is running; restarting the GPU client brings it back to full speed again).
With the nerfed A3 bigadv work units, an i7 clocked at about 3.8-4GHz does around 20k PPD (26k before, with the A2 work units).

The good thing, though, is that the nerf to points seems global: as GPU3 comes into play that's 20% slower, A3 normal work units are about 20-40% slower and bigadv A3 is 20-30% slower.
Glix 25th June 2010, 21:47 Quote
Remember the days when the CPU did all the work on its own? I do, and the nightmare that followed. xD
Star*Dagger 25th June 2010, 22:33 Quote
WOW something that nVidia can do that is FAST, maybe they can sell millions of their cards to researchers because Gamers™ know enough to buy ATI

.-
VicDiesel 26th June 2010, 02:46 Quote
Quote:
Originally Posted by Phil Rhodes
Actually, I'm surprised it's only 14 times;

Up to 14 times. Up to. The average was 2.5.

And they didn't report how much effort it took to recode those benchmarks. A simple port will probably run abysmally. You really have to work hard to get those hundreds of threads running that the GPU requires.

So if you have that one kernel (probably graphics) that speeds up bigtime, go for it. If you have something else, think very hard if you want to invest a couple of weeks/months in rewriting your code for a small gain.

V.
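To give a feel for the recoding effort VicDiesel describes, here's a hedged sketch of the host-side ceremony even a trivial CUDA port adds (illustrative only - a real port also means tuning launch configuration, memory layout and data transfers):

#include <cuda_runtime.h>
#include <vector>
#include <cstdio>

// Trivial kernel: scale an array in place.
__global__ void scale(float* data, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1 << 20;
    std::vector<float> host(n, 1.0f);

    // On the CPU this whole program would be a three-line loop; the GPU
    // port adds allocation, transfers and a launch configuration.
    float* dev = nullptr;
    cudaMalloc(&dev, n * sizeof(float));
    cudaMemcpy(dev, host.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    const int threadsPerBlock = 256;
    const int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    scale<<<blocks, threadsPerBlock>>>(dev, n, 2.0f);

    cudaMemcpy(host.data(), dev, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(dev);

    printf("host[0] = %f\n", host[0]);  // expect 2.000000
    return 0;
}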
VicDiesel 26th June 2010, 02:51 Quote
Quote:
Originally Posted by Faulk_Wulf
n00b question. But if GPUs are so much faster-- why haven't we adapted them to replace CPUs as the main processor for tasks? Is it like a standard-vs-metric type of thing where its just completely incompatible? Just seems like it'd be a massive upgrade.

That is because software has to be massively rewritten to work efficiently on a GPU. With regular CPUs if the clock got faster, the application got faster. No work required. If the core count goes up, you have to do some stuff with threading before you see a gain. But that's manageable. With GPUs you basically have to recode your application.

If you're a relatively small application and it happens to be one that gets good speed up (read, you're a game and graphics determines your speed) then you'll invest the effort. If you're something like MS Windows, you'll never run on a GPU. Too much work and no gain.

V.
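The 'some stuff with threading' middle step is comparatively gentle: an existing loop can often be split across a handful of CPU threads without restructuring the whole program. A hypothetical host-side sketch (plain C++, no GPU involved):

#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

// Split an existing element-wise loop across a few CPU threads;
// the loop body itself is unchanged.
void scale_parallel(std::vector<float>& data, float factor, unsigned workers = 4) {
    std::vector<std::thread> pool;
    const std::size_t chunk = (data.size() + workers - 1) / workers;

    for (unsigned w = 0; w < workers; ++w) {
        const std::size_t begin = w * chunk;
        const std::size_t end = std::min(data.size(), begin + chunk);
        pool.emplace_back([&data, factor, begin, end] {
            for (std::size_t i = begin; i < end; ++i)
                data[i] *= factor;  // same loop body as the serial version
        });
    }
    for (auto& t : pool) t.join();  // wait for every worker to finish
}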
MajestiX 26th June 2010, 10:39 Quote
Isn't this why Intel is playing with multi-core Atom CPUs?

Keep the clock rate the same and pump up the number of cores by the hundreds, or shrink the chip.

They are doing a lot of R&D to test new markets; a company that size isn't going to roll over after a few bad hands.
Bakes 26th June 2010, 17:53 Quote
Quote:
Originally Posted by Phil Rhodes
Quote:
As always, it depends on what your doing with your hardware.

Quite so. Actually, I'm surprised it's only 14 times; for graphics I suspect it would really be quite a lot more than that, frame rate for frame rate (though a CPU based graphics engine would likely give more accurate results, if you care).

P

Hence why we use CPUs for general processing and GPUs for graphics processing.
Quote:
Originally Posted by Faulk_Wulf
n00b question. But if GPUs are so much faster-- why haven't we adapted them to replace CPUs as the main processor for tasks? Is it like a standard-vs-metric type of thing where its just completely incompatible? Just seems like it'd be a massive upgrade.

Whilst the processing units are getting more powerful, there are still loads of CPU functions and capabilities that GPUs simply cannot handle.
Quote:
Originally Posted by Elton
Parallelism. Imagine programming something that uses 320 threads.

Hell imagine the time it would take to program something with 64 threads.

This paper was designed to try and persuade people that Intel chips were faster in large supercomputers and clusters, where you might see over 20,000 cores.
Quote:
Originally Posted by wuyanxu
Expected result for a 3.2GHz i7 vs a GTX 280 - in fact, 2.5x is about the speed difference I get when encoding a video on an i7 860 versus a GTX 260.

What Intel should have shown is single-threaded performance, or heavily branch-dependent performance.

Single threaded performance is pointless, simply because it's not representative of real world use. This is not a benchmark designed to interest geeks at their computers comparing Apples to Oranges, it's a scientific paper to convince people that when they're making their large processing clusters, they should use lots of Intel CPUs rather than nVidia GPUs.

With regard to video encoding, I assume you use Badaboom. The reason why it's much faster is that they compromise massively on quality. If you drop the settings in (say) Handbrake to a comparable level, you're going to have roughly the same speed in either, but as soon as you try to crank up the image quality to HD level, the GPU will be a long way behind.

To Gareth: I'm disappointed in you. The headline borders on the sensationalist, when even in your own article you write that it's only in one of the tests. It's a bit like saying 'i7 3000 times slower than GTX480' when the test you did was rendering Crysis. 2.5x faster is the actual figure, according to the study, so for you to say that it's 14x faster is basically taken straight from the Daily Mail Handbook of sensationalist headlines!
knutjb 26th June 2010, 21:21 Quote
I agree with Bakes that this is a scientific exercise that will guide Intel on both marketing and future product development. Intel and AMD have a much greater ability to impact the market than does Nvidia. GPU-only computing can only stand on its own in a limited way. Nvidia has made progress, but it's not a broad-spectrum tech at this time and I don't think they will make it that way.

Don't think that Intel will take this lying down; they have the resources to adapt. AMD was thought to be crazy in buying ATI. Nvidia might want to push CUDA over competitors, but I don't think they can, because they aren't in a position to overthrow Intel and its presence on the software side. I think AMD/ATI might be in the best position to integrate CPU/GPU: they have a CPU that Nvidia doesn't have and far more experience than Intel in GPUs.

In the real world these ideas and exercises don't always make it to market but they usually do have a significant impact on future hardware and software architecture. I do think GPU-CPU integration will happen but the GPUs will be used to increase parallel data processing performance and not specifically for graphics output. Think of the money ATI and Nvidia make on graphics cards, they won't want to give up those highly profitable margins. Plus high performance GPUs generate a lot of heat and how much thermal density can a CPU-GPU integrated package handle? If it takes up too much real estate it will likely not push GPU cards off the board. In the mass corporate world with low demands on GPUs it will make a lot of sense to further integrate more, if not all, functions onto a single chip where thermal limits aren't pushed and power consumption is a major consideration.
Gradius 28th June 2010, 00:42 Quote
14x is only for some cases.

In reality, we're 2.5 times slower than we should be.

Way to go Intel. :/
metarinka 28th June 2010, 05:56 Quote
I think people also forget that mass parallelization isn't extremely practical for a lot of everyday applications. Some tasks simply cannot be parallelized because you're waiting on data from a previous operation.

Now, when we are talking about computer clusters that handle large data sets, then threading becomes relevant. SETI@home and Folding@home are evidence of the types of tasks that can be parallelized to the nth degree and still see performance gains. Some tasks can only be reduced so much before you don't gain anything.


I actually foresee the whole industry moving towards threading, and it will take the hardware, software and people (coders etc.) a good 10-15 years for good threading practice and standards to come into place. If you look at all the CPU roadmaps, they are moving to 6-8+ cores. I think we'll see some type of hybrid hardware that can scale down to a few very fast cores for serial computing and scale up to N cores for massively parallel applications - a la joining a CPU and GPU into one die.
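The 'can only be reduced so much' point is usually put in numbers via Amdahl's law; a hedged back-of-envelope sketch with illustrative figures (not from the article):

#include <cstdio>

// Amdahl's law: if a fraction p of the work parallelises across s workers,
// the overall speedup is 1 / ((1 - p) + p / s).
double amdahl(double p, double s) { return 1.0 / ((1.0 - p) + p / s); }

int main() {
    // Even with hundreds of GPU-style workers, a task that is only
    // 90% parallel tops out below 10x - the serial 10% dominates.
    printf("90%% parallel, 240 workers: %.1fx\n", amdahl(0.90, 240));  // ~9.6x
    printf("99%% parallel, 240 workers: %.1fx\n", amdahl(0.99, 240));  // ~70.8x
    return 0;
}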
TheMusician 28th June 2010, 06:33 Quote
Quote:
Originally Posted by Faulk_Wulf
n00b question. But if GPUs are so much faster-- why haven't we adapted them to replace CPUs as the main processor for tasks? Is it like a standard-vs-metric type of thing where its just completely incompatible? Just seems like it'd be a massive upgrade.

We're getting there. Flash 10.1 is just the start.
Splynncryth 28th June 2010, 15:58 Quote
This just seems to be the next processing architecture argument. There may be some valid parallels from the discussion of x86 vs EPIC years ago.

But computers are more than the underlying hardware, they are about software too. You need a solid community of systems programmers and tool authors to make the platform work. To me, this is why AMD's x64 won out over Intel's IA64.
It's the same story with the GPGPU idea.
Bakes 28th June 2010, 16:55 Quote
Quote:
Originally Posted by Splynncryth
But computers are more than the underlying hardware, they are about software too. You need a solid community of systems programmers and tool authors to make the platform work. To me, this is why AMD's x64 won out over Intel's IA64.
It's the same story with the GPGPU idea.

AMD64 won because it was backwards compatible with x86.