Feel the power - Nvidia has today announced Tesla, its upcoming GPGPU product.
Many believe that one of the main reasons why AMD bought graphics company ATI was the advancement of GPGPU, or stream computing, in the high-performance computing (HPC) market.
Back at the GeForce 8800 launch, arch rival Nvidia talked about CUDA, the GPU Computing aspects of its G80 graphics processor, but didn't really give any timeframe on when we'd see it rolled out. We were also left wondering how the company would cater for the HPC market because it didn't give any indication as to whether it was going to release a dedicated GPU computing device.
Well, wonder no more - the company has announced its own contribution to the GPGPU world. Meet Tesla.
For those who aren't entirely familiar with the concept of GPGPU, it's a lot easier than it sounds. The modern GPU is one of the two most complex chips in the system, and some would consider it the most complex piece of silicon in a PC. Today's GPU is a massively parallel processing device, and so it can be many times quicker than a CPU at massively threaded tasks.
Three setups are scheduled for release - the C870, D870 and S870. The C870 is the Tesla card: a PCI-Express board and the only "desktop" model, it has no display output - it's literally a massively threaded computing processor. The core is clocked at 575MHz, and the 128 stream processors run at 1.35GHz (exactly as on the GeForce 8800 GTX), resulting in over 500 GigaFLOPS of compute power.
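That headline figure checks out arithmetically if you assume the commonly cited three floating-point operations per stream processor per clock on G80 (a multiply-add plus a multiply) - an assumption on our part, not an official Nvidia breakdown:

```python
# Theoretical peak for the C870, assuming 3 FLOPs per SP per clock
# (dual-issue MAD + MUL, as commonly cited for G80-class parts).
stream_processors = 128
shader_clock_hz = 1.35e9
flops_per_sp_per_clock = 3  # assumption, not an official figure

peak_gflops = stream_processors * shader_clock_hz * flops_per_sp_per_clock / 1e9
print(peak_gflops)  # 518.4 - comfortably "over 500 GigaFLOPS"
```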
There's also 1.5GB of GDDR3 memory clocked at 1600MHz on board, just for kicks. Of course, it takes two PCIe power connectors and can suck up 170W of power at load - just like the regular G80. The big difference is the price - the C870 will set you back a whopping $1,499.00.
Two external systems have been developed as well, the D870 and S870. These take C870s and run them in parallel - two cards in the D870, which looks like a very mini tower, and four in the S870, passively cooled and built into a 1U rackmount chassis. Nvidia says performance scales pretty linearly on its multi-GPU Tesla solutions, because there is no SLI overhead - the four GPUs in the S870 are simply controlled by four different threads on the CPU. Thus, you'll get over two TeraFLOPS out of the S870 at peak. The cost, in terms of power, is around 550W typical and a peak of almost 800W - not bad for a device that can deliver that level of compute power.
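The "one host thread per GPU" arrangement can be sketched in a few lines of Python - the device IDs, workload and the squaring function here are our own illustrative stand-ins, not Nvidia's API:

```python
import threading

def drive_device(device_id, work, results):
    # Stand-in for one host CPU thread feeding one Tesla GPU its share
    # of the job; the actual computation here is just a placeholder.
    results[device_id] = sum(x * x for x in work)

def run_on_s870(data, num_gpus=4):
    # One CPU thread per GPU and no inter-GPU synchronisation (no
    # SLI-style overhead), so throughput scales roughly linearly
    # with the number of devices.
    step = (len(data) + num_gpus - 1) // num_gpus
    results = [0] * num_gpus
    threads = [
        threading.Thread(target=drive_device,
                         args=(i, data[i * step:(i + 1) * step], results))
        for i in range(num_gpus)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(results)
```

Because each thread talks to its own device and the partial results are only combined at the end, adding a second, third or fourth GPU adds its full throughput rather than a fraction of it.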
Each of these systems connects through an external PCIe host card, and they are designed for high-end render farms and large-scale computations. Of course, that use is reflected in their prices - the D870 is $7,500 and the S870 is a massive $12,000.
Though the price is a little high for your average consumer, it's not targeted at them. It's aimed at large corporations that need parallel computing power en masse, and it'll get the same level of support as Nvidia's Quadro-based workstation graphics cards. Thus, we'll see certified applications and certified systems that will run mission-critical workloads without failure.
Have you got a thought on the releases? Tell us about it in our forums.