bit-tech.net

Nvidia joins GPGPU race

Feel the power - Nvidia has today announced Tesla, its upcoming GPGPU product line.

Many believe that one of the main reasons AMD bought graphics company ATI was to advance GPGPU, or Stream Computing, in the High-Performance Computing market.

Back at the GeForce 8800 launch, arch rival Nvidia talked about CUDA, the GPU Computing aspects of its G80 graphics processor, but didn't really give any timeframe on when we'd see it rolled out. We were also left wondering how the company would cater for the HPC market because it didn't give any indication as to whether it was going to release a dedicated GPU computing device.

Well, wonder no more - the company has announced its own contribution to the GPGPU world. Meet Tesla.

For those who aren't entirely familiar with the concept of GPGPU, it's simpler than it sounds. The modern GPU is one of the two most complex pieces of silicon in a PC, and some would consider it the most complex of all. Today's GPU is a massively parallel processor, and in heavily threaded tasks it can therefore be many times quicker than a CPU.
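To give an idea of the programming model Nvidia is pushing with CUDA - the names, sizes and kernel below are our own illustrative choices, not Nvidia code - a data-parallel job such as scaling a large array is written as a small kernel that the GPU runs across thousands of threads at once:

// Minimal CUDA sketch (illustrative only): scale a large array on the GPU.
// Each thread handles one element; the GPU schedules thousands of threads in parallel.
#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

__global__ void scale(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // unique index for this thread
    if (i < n)
        data[i] *= factor;
}

int main(void)
{
    const int n = 1 << 20;                            // one million elements
    float *host = (float *)malloc(n * sizeof(float));
    for (int i = 0; i < n; i++)
        host[i] = (float)i;

    float *dev;
    cudaMalloc((void **)&dev, n * sizeof(float));
    cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);

    scale<<<(n + 255) / 256, 256>>>(dev, 2.0f, n);    // 4096 blocks of 256 threads

    cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("host[42] = %f\n", host[42]);              // expect 84.0

    cudaFree(dev);
    free(host);
    return 0;
}

The same kind of code runs on any CUDA-capable G80 part, whether that's a GeForce 8800 or the Tesla hardware announced today.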

Three setups are scheduled for release - the C870, D870 and S870. The C870 is the Tesla card, a PCI-Express board and the only "desktop" model, and it has no display output - it's literally a massively threaded computing processor. It's clocked at 575MHz core, and the 128 stream processing units are clocked at 1.35GHz (exactly like on the GeForce 8800 GTX), resulting in over 500 GigaFLOPS of compute power.

There's also 1.5GB of GDDR3 memory clocked at 1600MHz on-board, just for kicks. Of course, it takes two PCIe power connectors and can suck up 170W of power at load - just like the regular G80. The big difference is the price - the C870 will set you back a whopping $1,499.00.
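For the curious, both headline numbers fall out of simple arithmetic, if you assume G80's widely reported dual-issue MADD+MUL stream processors and the same 384-bit memory interface as the 8800 GTX (neither detail is spelled out in the announcement itself): 128 SPs x 1.35GHz x 3 FLOPS per clock comes to roughly 518 GigaFLOPS, and 1,600MHz effective x 384 bits / 8 comes to 76.8GB/s of memory bandwidth.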

Two external systems have been developed as well, the D870 and S870. These take C870s and run them in parallel - two cards in the D870, which looks like a very mini tower, and a whopping four in the S870, passively cooled and built into a 1U rackmount chassis. Nvidia says performance scales pretty linearly on its multi-GPU Tesla solutions, because there is no SLI overhead - the four GPUs in the S870 are just controlled by four different threads on the CPU. Thus, you'll get over two TeraFLOPS out of the S870 at peak. The cost, in terms of power, for this is around 550W typical and a peak of almost 800W - not bad for a device that can deliver that level of compute power.
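A rough sketch of that "one host thread per GPU" arrangement, using the CUDA runtime and POSIX threads - the kernel and workload here are placeholders of our own, not Nvidia's Tesla software:

// Illustrative sketch: drive several Tesla GPUs from one host process,
// one CPU thread per GPU, as described above for the S870.
#include <cuda_runtime.h>
#include <pthread.h>
#include <stdio.h>

__global__ void crunch(float *buf, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        buf[i] = buf[i] * buf[i] + 1.0f;              // stand-in for real work
}

static void *worker(void *arg)
{
    int device = (int)(long)arg;
    cudaSetDevice(device);                            // bind this host thread to one GPU

    const int n = 1 << 22;
    float *dev;
    cudaMalloc((void **)&dev, n * sizeof(float));
    cudaMemset(dev, 0, n * sizeof(float));

    crunch<<<(n + 255) / 256, 256>>>(dev, n);
    cudaThreadSynchronize();                          // wait for this GPU to finish

    cudaFree(dev);
    printf("GPU %d done\n", device);
    return NULL;
}

int main(void)
{
    int count = 0;
    cudaGetDeviceCount(&count);                       // e.g. four GPUs behind an S870
    if (count > 16) count = 16;                       // keep within our fixed thread array

    pthread_t threads[16];
    for (int d = 0; d < count; d++)
        pthread_create(&threads[d], NULL, worker, (void *)(long)d);
    for (int d = 0; d < count; d++)
        pthread_join(threads[d], NULL);
    return 0;
}

Because each GPU works on its own slice of data with no SLI-style synchronisation between boards, throughput scales close to linearly with the number of GPUs.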

Each of these systems connects to the host through an external PCIe adapter card, and both are designed for high-end render farms and large-scale computations. Of course, that use is reflected in their prices - the D870 is $7,500 and the S870 is a massive $12,000.

Though the price is a little high for your average consumer, it's not targeted at them. It's more aimed at large corporations that need parallel computing power en masse, and it'll get the same level of support as Nvidia's Quadro-based workstation graphics cards. Thus, we'll see certified applications, and certified systems that will run mission-critical applications without failure.

Have you got a thought on the releases? Tell us about them in our forums.

19 Comments

DougEdey 21st June 2007, 12:52 Quote
For anyone wanting a rather brief but useful insight into GPGPU stuff: http://leri.univ-reims.fr/~nocent/uot/Site/GPU%20Master%20Class.html

This was a lecture given at my uni during the last few months (I couldn't make it back) and the lecture information is rather useful.
David_Fitzy 21st June 2007, 12:53 Quote
Yay tons of near pointless Hz, and nVidia sucking even more Watts, they should start investing in the national grid
Phil Rhodes 21st June 2007, 13:39 Quote
Yes, marvellous, lovely, smashing, super.

Now what does it actually do?

Phil
TomH 21st June 2007, 13:51 Quote
Quote:
Originally Posted by Phil Rhodes
Yes, marvellous, lovely, smashing, super.

Now what does it actually do?

Phil
Everything! Well, almost. You couldn't replace your CPU with it, but you could offload your insane mathematical calculations onto it, and it'll run them faster than a dual Core 2 Quad system.

Did you even RTFA? How about checking some of the videos on Nvidia's Tesla page?

Now I need to try and find an excuse to get the boss to buy one... Somehow
C-Sniper 21st June 2007, 14:23 Quote
so basically in the future we will have a separate motherboard for our graphics processing which will have unparalleled memory and require its own 2000W power supply.
DXR_13KE 21st June 2007, 14:45 Quote
pointless, expensive and very non energy efficient.
mmorgue 21st June 2007, 14:51 Quote
The numbers sound impressive, but like everyone here has said, "What's the point? What do *I* get out of it?"

At the moment I can't see how this benefits the average game enthusiast -- the costs and requirements are almost prohibitively high. Again, I do say almost, as some people are quite mad... ;)

I suppose it's fantastic for labs, researchers, engineers and CAD work. But for me, as a gamer -- it has potential, but I really don't want my game machine sucking up 3000+ watts in two separate boxes!

/sigh, it was so much easier with my C64...
./^\.Ace./^\. 21st June 2007, 15:01 Quote
OK, I know what a GPU is, but what is a GPGPU :? Is it like two GPUs, or is it like a dual-core GPU :? Also, I think Nvidia should be using their time to make a GPU that doesn't use so much power :( What good is a fast rig if you can't afford the energy bill? Also, with the speeds that Nvidia is now getting out of their GPUs, does anyone think they will eventually make a leap to processors :?
chrisb2e9 21st June 2007, 17:45 Quote
For anyone who said something to the effect of "what does this do for me?", the answer is nothing. Quote from the article:
"It's more aimed at large corporations that need parallel computing power en masse, and it'll get the same level of support as Nvidia's Quadro-based workstation graphics cards. Thus, we'll see certified applications, and certified systems that will run mission-critical applications without failure."

Down the road this may be applied to running our games, but in the meantime I'm sure it will end up somewhere inside the next super chess computer.
TGImages 21st June 2007, 17:55 Quote
This Tesla is even more expensive... to the tune of $100,000

http://www.teslamotors.com/

But back to the topic. A $12,000 video card? Even for the high-end CAD stations I buy and set up at work, we are quite happy with the $1000 FX3450 series. What application can really justify this card???
chrisb2e9 21st June 2007, 18:01 Quote
I would guess applications like Lord of the Rings where they had to render massive scenes.
Mister_Tad 21st June 2007, 18:15 Quote
Quote:
Originally Posted by TGImages
This Tesla is even more expensive... to the tune of $100,000

http://www.teslamotors.com/

But back to the topic. A $12,000 video card? Even for the high-end CAD stations I buy and set up at work, we are quite happy with the $1000 FX3450 series. What application can really justify this card???

It's more like a CPU than a video card - well, more like many, many CPUs.
Quote:
Originally Posted by DXR_13KE
pointless, expensive and very non energy efficient.

Did you not read the write-up?
2 TeraFLOPS of general-purpose processing power for 800W peak. It would require *many times* more than that to run a rendering or number-crunching farm of similar capabilities, and it would cost a lot more to boot.

It isn't pointless if you need it. You're not supposed to pop one of these in your home system - desktop apps aren't programmed to take advantage of things like this. This is exclusively for high-end number crunching.
Bluephoenix 21st June 2007, 19:01 Quote
The basic one shown may wind up in some CAD analysis workstations if the FEA (finite element analysis) programs start to take advantage of CUDA, but otherwise it will simply be for extremely specialised applications (like F@H).

For those calling this a video card: it isn't at all. It may be based on GPU technology, but it is used for entirely different things.
Fr4nk 21st June 2007, 19:22 Quote
£10 says you can flash an 8800 GTX to a Tesla BIOS and use it :p. Otherwise, some very interesting and powerful computing kit.
Tim S 21st June 2007, 19:23 Quote
FWIW, you can use an 8800 GTX with CUDA... this is just a bit more specialised and will be certified like the Quadros are.
completemadness 21st June 2007, 21:44 Quote
Quote:
Originally Posted by ./^\.Ace./^\.
OK I know what a GPU is but what is a GPGPU
General Purpose Graphics Processing Unit

Basically, you can use the GPU as more of a CPU
A GPU is extremely powerful (as shown by running F@H on a GPU); however, it can only perform a very select range of functions, so it will never really replace a CPU (which will do almost anything, including GPU functions, but much more slowly).

The idea of this box is not to be another GFX card, but more like a much more specialised CPU.

Currently, it can take hours to render single frames for films or something along those lines.
Plug one of these in and you could eventually start working in real time - that is going to be very appealing to companies making films and such.

This isn't useful for normal people, but in the business world these boxes are very valuable
ou7blaze 23rd June 2007, 13:52 Quote
This could probably mean better CGI in movies in the future :D
outlawaol 11th August 2007, 20:01 Quote
Quote:
Originally Posted by ou7blaze
This could probably mean better CGI in movies in the future :D

No, it would just mean faster rendered CGI in the future.

They render stuff at insane resolutions to begin with; this would just make it faster.

From what I've heard, the 8xxx-series cards can render in real time what Pixar was pre-rendering in Toy Story. Granted, that's at full-frame or HD levels, not theater quality.

These could be the steps that push general computing to super computing in the future, for a normal system I mean.

:)
chrisb2e9 11th August 2007, 22:00 Quote
Quote:
Originally Posted by outlawaol
No, it would just mean faster rendered CGI in the future.

They render stuff at insane resolutions to begin with; this would just make it faster.


:)

So it's not 640x480?
:D