bit-tech.net

What is the best graphics card for folding?

Comments 1 to 25 of 63

badsector 5th August 2010, 13:17 Quote
I use two GTX 460s (EVGA SC [overclocked] 768) in an i7 system and get approx 9,600 PPD per card.
Seemed better value for money than a single GTX 480. Not sure why my figures are higher, but they're what FAHmon reports (do they include bonus points?).
Xir 5th August 2010, 13:20 Quote
Quote:
Although you can fold for a few hours a day, you'll really help the project (and therefore medical science) if you leave the folding client running 24/7.
At 300W? That's 7.2kWh per day... (about one euro). ;)

How does the SETI client run? :D
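Xir's back-of-the-envelope figure is easy to check. A minimal sketch: the 300W draw comes from the post above, while the electricity price of 0.14 EUR/kWh is an assumed rate, picked only to roughly match the "about one euro" estimate:

```python
# Daily energy and cost of folding 24/7 at a steady 300W draw
# (figure from the post above). The 0.14 EUR/kWh price is an
# assumption chosen to match the "about one euro" estimate.
power_w = 300
hours_per_day = 24
price_eur_per_kwh = 0.14

kwh_per_day = power_w * hours_per_day / 1000       # 7.2 kWh/day
cost_eur_per_day = kwh_per_day * price_eur_per_kwh

print(f"{kwh_per_day:.1f} kWh/day, ~{cost_eur_per_day:.2f} EUR/day")
```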
Kúsař 5th August 2010, 13:29 Quote
Unless someone finds out what's wrong with folding on ATi GPUs, it's a complete waste of power. Though there are other projects that make better use of ATi GPUs than F@H...
mi1ez 5th August 2010, 13:56 Quote
Could well sway my next purchase...
Pete J 5th August 2010, 13:58 Quote
What temperatures did the cards reach?
erratum1 5th August 2010, 14:17 Quote
Quote:
Although you can fold for a few hours a day, you'll really help the project (and therefore medical science) if you leave the folding client running 24/7.

As important as this research is, have we forgotten about global warming?

Hardly worth me using low-energy light bulbs and not leaving stuff on standby if you nerds have your comps with multiple GTX 480s going 24/7.
Teelzebub 5th August 2010, 14:27 Quote
There's a guy on another forum (the owner, actually) who folds and games at the same time with his GTX 480 and reckons it reaches 90°C+.

Personally I think the electric company benefits the most from people folding.
Phalanx 5th August 2010, 14:30 Quote
Quote:
Originally Posted by erratum1
As important as this research is, have we forgotten about global warming?

Hardly worth me using low-energy light bulbs and not leaving stuff on standby if you nerds have your comps with multiple GTX 480s going 24/7.

Of course, that's only if you believe humans are contributing to global warming beyond what the climate would do even if everyone stopped polluting. :)
borandi 5th August 2010, 14:36 Quote
It's a shame this reviewer doesn't know about the differences in the architecture of the cards, or the maturity of each of the algorithms used in both the ATI and NVIDIA implementations.

NVIDIA: nice architecture, mature platform, optimised libraries
ATI: more flops per clock, poor documentation, unoptimised libraries

As a GPU programmer, almost everyone I come into contact with in this regard prefers the NVIDIA architecture, because it's more flexible and more mature. However, if you have algorithms that take advantage of the ATI high flops per clock (e.g. graphics, high workload per thread), and you want to go through the ATI documentation, then the ATI route is worth it.

For example, Milkyway@Home, Collatz@Home and DNETC@Home (all part of the BOINC cloud) all benefit heavily from the ATI architecture. In some cases, it's because the ATI clients produced for those projects are a lot more mature and optimised than the NVIDIA ones.

I've delved into computational protein folding before from the standpoint of someone who looks at the results, and the upshot is that you need so many results to reach one statistical conclusion that, if I wanted to donate computer time, I'd rather do it for something else. Sure, protein folding has noble goals, and there is a high throughput of results, but the results are still ultimately statistical and are a scratch compared to the practical biochemistry that goes on.
Chris P 5th August 2010, 14:38 Quote
Quote:
Originally Posted by erratum1
As important as this research is, have we forgotten about global warming?

Hardly worth me using low-energy light bulbs and not leaving stuff on standby if you nerds have your comps with multiple GTX 480s going 24/7.

One could also argue that without a history of research, you wouldn't have low-energy bulbs and devices to leave on standby :D

And how much energy is being used to find a solution for 'global warming' by supercomputers and data centres churning over computer models and collating data?

With regard to folding, I keep getting error messages mid-GPU-fold (say 60-90% of the way through) with my current system and I can't find the problem...
vampalan 5th August 2010, 15:07 Quote
A little off topic, but note that you can't mix and match brands of graphics card, e.g. using Nvidia as a maths/physics processor and ATi as the graphics card. It's possible on the hardware side; it's just that the drivers have been written to lock it out. There have been hacks to make it work, but a hack is a hack, and last time I checked, it came with a virus of some sort.
Lizard 5th August 2010, 15:30 Quote
Quote:
Originally Posted by borandi
It's a shame this reviewer doesn't know about the differences in the architecture of the cards, or the maturity of each of the algorithms used in both the ATI and NVIDIA implementations.

You do realise that bit-tech didn't write any of the folding clients - we're merely reporting what results the different clients get on the various GPUs that can fold.

As for the differences between the various architectures/APIs, this is discussed in brief on the final page, but given that there's no solid info from Stanford about the huge performance difference between ATI and Nvidia GPUs when folding, it's impossible to point the finger specifically at what the problem is.
paisa666 5th August 2010, 15:44 Quote
Quote:
Originally Posted by borandi
It's a shame this reviewer doesn't know about the differences in the architecture of the cards, or the maturity of each of the algorithms used in both the ATI and NVIDIA implementations.

NVIDIA: nice architecture, mature platform, optimised libraries
ATI: more flops per clock, poor documentation, unoptimised libraries

As a GPU programmer, almost everyone I come into contact with in this regard prefers the NVIDIA architecture, because it's more flexible and more mature. However, if you have algorithms that take advantage of the ATI high flops per clock (e.g. graphics, high workload per thread), and you want to go through the ATI documentation, then the ATI route is worth it.

For example, Milkyway@Home, Collatz@Home and DNETC@Home (all part of the BOINC cloud) all benefit heavily from the ATI architecture. In some cases, it's because the ATI clients produced for those projects are a lot more mature and optimised than the NVIDIA ones.

I've delved into computational protein folding before from the standpoint of someone who looks at the results, and the upshot is that you need so many results to reach one statistical conclusion that, if I wanted to donate computer time, I'd rather do it for something else. Sure, protein folding has noble goals, and there is a high throughput of results, but the results are still ultimately statistical and are a scratch compared to the practical biochemistry that goes on.

What?? The article is about Folding@home, and ATi is currently not worth it for that. If it's good for something else then awesome, but this is about folding: why does the reviewer have to point out something else???
Xtrafresh 5th August 2010, 16:03 Quote
I seriously don't see the appeal in folding. It seems to me to be a big willy-waving thing that allows you to put on a pious face and say you are doing it for science.

What I really don't get is that you guys wasted your valuable time testing four different ATI cards, when everybody knows that nVidia is the better folder by miles and miles. Just test the 5970 to prove the point that the 5xxx architecture hasn't changed this, and move on.

Sorry to be so sour on this, but to my mind there are still so many things you could be researching and writing articles on that I'm just a bit disappointed to see about a full week's worth of investigation time (or am I just that far off?) invested in such a predictable and marginal thing as this.
confusis 5th August 2010, 16:07 Quote
Stanford really need to get their butts into gear and code properly for ATI. nVidia, as usual, use bribery to get their cards sold, so ATI with their lower budget gets left out again.

Higher FLOPS should equal higher PPD, but the GPU client for ATI is a bodge job!
Quote:
Originally Posted by Xtrafresh
I seriously don't see the appeal in folding. It seems to me to be a big willy-waving thing that allows you to put on a pious face and say you are doing it for science.
Yeah, it's a bit of willy-waving, but I'd rather spend a little money a month towards finding a cure/cause for a lot of human suffering than waste it on more booze/geekery for me.
Xlog 5th August 2010, 16:59 Quote
Also, there is this video (YouTube: axN0xdhznhY), so you can fold while folding :D.
Deadpunkdave 5th August 2010, 17:05 Quote
Quote:
Originally Posted by Xtrafresh
I seriously don't see the appeal in folding. It seems to me to be a big willy-waving thing that allows you to put on a pious face and say you are doing it for science.

What I really don't get is that you guys wasted your valuable time testing four different ATI cards, when everybody knows that nVidia is the better folder by miles and miles. Just test the 5970 to prove the point that the 5xxx architecture hasn't changed this, and move on.

Sorry to be so sour on this, but to my mind there are still so many things you could be researching and writing articles on that I'm just a bit disappointed to see about a full week's worth of investigation time (or am I just that far off?) invested in such a predictable and marginal thing as this.

I have zero interest in RC cars, but I don't mind the articles about them being on here, because I know there is a large section of the readership that does enjoy them. Likewise, a lot of people on here are into folding. Many do it because they have been personally affected by the diseases that F@H investigates. I really don't see that as pious. Sometimes it looks like 'willy-waving' because there is purposely a degree of competition, and the hardware required to fold seriously is expensive, but it's friendly and for a good cause.
Muaadib 5th August 2010, 18:13 Quote
This "GTX460" is being recommended for everything...


Up next: research proves that if you put your GTX 460 at a 45-degree angle pointing north-east for 3 minutes, you will have a 70% higher chance of getting laid that day....
B1GBUD 5th August 2010, 18:37 Quote
I wouldn't say the GTX 480 is "undesirable for gamers"; quite the opposite. The stats also prove how good Fermi is for folding.

To those arguing about the balance of power consumption vs global warming: most cards today seem to consume a lot of power compared to the days when they would clock down when not in 3D (Aero is partly to blame for this, but so am I; I do like the extra graphical quirks). Having a folding client running while your PC isn't doing much would surely benefit the world in the long term.
Zero82z 5th August 2010, 18:46 Quote
Quote:
As an added bonus, even though the GeForce GTX 295 draws more power than any other graphics card, it's actually more power efficient than the next fastest card, the GeForce GTX 480, thanks to the efficient way that folding scales across multiple GPUs.
Huh? Your points per watt graph clearly shows both the GTX 470 and GTX 480 doing better than the GTX 295.

Also, you should point out that although the GPU2 client will not download work units when used with 5000-series ATI cards (because it doesn't recognize them properly - this can be fixed by using the -forcegpu ati_r700 flag), the GPU3 client will still use the GPU2 cores when folding on them, because GPU3 is CUDA-exclusive at the moment.
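The metric being disputed here is simply PPD divided by measured power draw. A minimal sketch, using made-up PPD and wattage figures purely for illustration (not the article's measured numbers), of how the card with the highest raw PPD can still lose on points per watt:

```python
# Points-per-watt comparison. The PPD and wattage values below are
# illustrative placeholders, NOT the article's measured figures.
cards = {
    "GeForce GTX 295": {"ppd": 16000, "watts": 450},
    "GeForce GTX 480": {"ppd": 15000, "watts": 400},
    "GeForce GTX 470": {"ppd": 12500, "watts": 350},
}

# Rank by efficiency: the GTX 295 has the highest raw PPD here,
# yet its points-per-watt comes out lowest of the three.
for name, d in sorted(cards.items(),
                      key=lambda kv: kv[1]["ppd"] / kv[1]["watts"],
                      reverse=True):
    print(f"{name}: {d['ppd'] / d['watts']:.1f} PPD/W")
```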
Lizard 5th August 2010, 18:54 Quote
Quote:
Originally Posted by Zero82z
Huh? Your points per watt graph clearly shows both the GTX 470 and GTX 480 doing better than the GTX 295.

That's true, but the sentence is talking about performance, not performance per watt, and by that metric the GTX 480 is the next card down from the GTX 295.
Quote:
Originally Posted by Zero82z
Also, you should point out that although the GPU2 client will not download work units when used with 5000-series ATI cards (because it doesn't recognize them properly - this can be fixed by using the -forcegpu ati_r700 flag), the client will still use the GPU2 cores when folding on them, because GPU3 is CUDA-exclusive at the moment.

GPU3 isn't CUDA exclusive - it supports ATI cards too - all it does is allocate WUs from the ATI server so you don't have to muck around with forcegpu flags.
r3loaded 5th August 2010, 19:10 Quote
Quote:
Originally Posted by Muaadib
This "GTX460" is being recommended for everything...


Up next: research proves that if you put your GTX 460 at a 45-degree angle pointing north-east for 3 minutes, you will have a 70% higher chance of getting laid that day....

"My GTX 460 brings all the girls to the yard, and they're like, it's better than yours, damn right, it's better than yours!" :D

But yeah, Nvidia really have nailed it with the GTX 460 - it brings all the benefits of Fermi's new architecture while fixing the original problems of high power consumption and heat output. And it retails at a very tasty price too.
thehippoz 5th August 2010, 20:06 Quote
I have to go to Stanford twice a year... I walked through the science wing once xD Yeah, they love nVidia over there.
Gradius 5th August 2010, 20:30 Quote
This only shows the software is NOT optimised for ATI GPUs AT ALL!
Zero82z 5th August 2010, 21:24 Quote
Quote:
Originally Posted by Lizard
That's true, but the sentence is talking about performance, not performance per watt, and by that metric the GTX 480 is the next card down from the GTX 295.
The sentence specifically states that the GTX 295 is more power efficient than the GTX 480. Power efficiency is the same thing as performance per watt.
Quote:
Originally Posted by Lizard
GPU3 isn't CUDA exclusive - it supports ATI cards too - all it does is allocate WUs from the ATI server so you don't have to muck around with forcegpu flags.
The GPU3 client supports ATI cards. The GPU3 core does not. Using the GPU3 client (honestly, it's not really a separate client, it's just version 6.30 and up of the GPU client executable) with ATI cards will only result in the client downloading GPU2 work units, because the current GPU3 core (fahcore_15.exe) is exclusive to CUDA. Stanford is working on the OpenCL version of the OpenMM core, but it is not ready, and neither are AMD's and nVidia's implementations of OpenCL in their drivers.

For more information on this, you can read about it on the F@H forums or on Vijay's blog: http://folding.typepad.com/