
Intel has answers to CUDA

Ct is a new parallel programming model designed to make scaling programs to many threads in Tera-Scale projects incredibly easy.

Day Zero at the Intel Developer Forum in Shanghai was largely uneventful, but there was at least one interesting tidbit to be found amongst all of the talk about Intel’s research efforts in China.

Our old friend Larrabee popped up during the ‘Techsperts’ session, when one of Intel’s researchers talked about the company’s parallel programming environment, which has been coined ‘Ct’.

Ct is a flexible programming model that lets developers program in a way that’s familiar to them. That sounds fairly generic, but where it gets interesting is that Ct is specifically designed to scale across Tera-Scale architectures for maximum performance, with no extra threading effort on the developer’s part.
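
To give a flavour of what ‘no extra threading effort’ means in practice, here’s an illustrative sketch. This is not the Ct API – Intel hasn’t shown that in any real detail yet – it simply uses a plain element-wise loop, with an OpenMP pragma standing in for the implicit threading Ct promises to provide without any annotation at all:

// Illustrative only: NOT the Ct API. The developer writes an ordinary
// element-wise loop; deciding how to spread it across however many cores
// the chip has is the runtime's problem, not the programmer's. Here an
// OpenMP pragma stands in for that implicit threading.
#include <cstdio>
#include <vector>

int main() {
    const int n = 1 << 20;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n);

    #pragma omp parallel for
    for (int i = 0; i < n; ++i)
        c[i] = a[i] * b[i] + a[i];

    std::printf("c[0] = %f\n", c[0]);
    return 0;
}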

Think of it as a similar tool to Nvidia’s CUDA HPC programming model, which enables developers to use the GeForce 8- and GeForce 9-series’ compute power for massively parallel tasks that aren’t graphics.
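
For anyone who hasn’t seen the CUDA side of that comparison, here’s a minimal sketch of what it looks like – an illustrative multiply-add kernel, not taken from anything Intel or Nvidia have shown. Notice that the parallelism is explicit: the developer writes the per-thread work and picks the launch dimensions.

#include <cstdio>
#include <cuda_runtime.h>

// Each GPU thread handles one element; the developer spells out the
// per-thread work and chooses how many threads to launch.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    float *hx = new float[n], *hy = new float[n];
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    float *dx, *dy;
    cudaMalloc((void **)&dx, bytes);
    cudaMalloc((void **)&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    // Explicit grid/block choice: 256 threads per block, enough blocks to cover n.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);
    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);

    std::printf("y[0] = %f\n", hy[0]);
    cudaFree(dx); cudaFree(dy);
    delete[] hx; delete[] hy;
    return 0;
}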

Everything is shaping up for Intel to assault the HPC market first with Larrabee, but don’t think Intel is going to stop just there. Another researcher all but confirmed that Larrabee will be released as a discrete GPU—or graphics card if you will—basically setting the stage for a fight for supremacy between AMD, Nvidia and Intel in the graphics market.

I believe Larrabee and Tera-Scale have a lot more in common than you would first think – Tera-Scale is a research project, right? Well, what do you think Larrabee is? Intel itself said during its pre-IDF briefing that Larrabee scales to teraFLOPS of compute power – that seems pretty Tera-Scale to me…

This is part of the reason why I find the whole Larrabee discussion an interesting one – exactly how much of Intel’s Tera-Scale research will make it into Larrabee in its first incarnation… and what’s left to come in future versions of this architecture?

I hoped to get some of these questions answered yesterday, as I was meant to have a one-to-one interview with Justin Rattner, Chief Technology Officer at Intel. However, Rattner has unfortunately been taken ill and there’s a chance he might not make his keynote tomorrow. The interview was rescheduled with Andrew Chien, a member of Rattner’s research team, but he was presenting on stage during my allotted interview time. I’ve been told that it’ll be rescheduled, hopefully for sometime today.

Are you excited at the prospect of Intel entering the discrete graphics market? Let us know your thoughts in the forums.

6 Comments

Mentai 2nd April 2008, 04:21 Quote
If this is as amazing as they say it is, and they release a discrete GPU leaps and bounds ahead of the competition (which I see as possible considering the lackluster 9800 GTX), then I'm very excited. Always good to have more competition and have Nvidia play catch-up again.

Hopefully the driver support will be up to scratch as well.
Nikumba 2nd April 2008, 08:25 Quote
I can't remember where I read it, but I'll have a look – there was an article on how Intel was looking at the discrete GPU market and could probably just stick a C2D on a card with some GDDR and release it.

If that were the case, I'm sure it would be cheaper than ATI/nVidia and better performing.
Xtrafresh 2nd April 2008, 08:51 Quote
Finally, something interesting. I'd really like a third company to take a stab at making GPUs. Not that we don't have enough good GPUs in the grey zone, but it's always nice to see more :)
Laitainion 2nd April 2008, 09:16 Quote
Quote:
Originally Posted by Nikumba
I can't remember where I read it, but I'll have a look – there was an article on how Intel was looking at the discrete GPU market and could probably just stick a C2D on a card with some GDDR and release it.

If that were the case, I'm sure it would be cheaper than ATI/nVidia and better performing.

Hate to burst your bubble, but a C2D (no matter how you clock it) could *never* be a faster graphics processor than anything in the current generation that nVidia or ATi have out. It simply isn't designed to be a massively parallel, floating point monster of doom. It would also need a separate memory controller for the GDDR3, which would have to talk to the C2D over an FSB, crippling bandwidth and memory performance.
chicorasia 2nd April 2008, 14:53 Quote
In the meantime, nVidia has quietly released CUDA for OSX.
Anakha 2nd April 2008, 18:09 Quote
Y'Know, I'm amazed no-one's made this interesting (And more than a little ironic) leap yet.

CUDA enables developers to calculate massively parallel tasks.
Ray-Tracing is a massively parallel task.

Ergo, CUDA is perfect for raytracing.

I mean, seriously. Ray-Tracing, at its heart, is a very simple equation that needs to be done many, many times (2 or 3 times per pixel). With 128 cores working on it at a high rate of speed (say, a single GeForce 8800 GT), it would still be miles faster (and cheaper) than using C2Ds (or, as was demonstrated by the OpenRT guys, a massive network of 16 C2Ds).
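
To make that concrete, here's a toy sketch of the per-pixel work as a CUDA kernel – purely illustrative and nothing to do with the OpenRT code mentioned above – firing one primary ray per pixel at a single sphere. Every pixel is independent of every other, which is exactly why the job spreads so naturally across hundreds of GPU threads.

#include <cstdio>
#include <cuda_runtime.h>

// Toy kernel: one thread per pixel shoots a primary ray from the origin
// through an image plane at z = 1 and tests it against a single sphere.
__global__ void trace(unsigned char *img, int w, int h) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= w || y >= h) return;

    // Ray direction through this pixel (unnormalised is fine for a hit test).
    float dx = (x - w * 0.5f) / w;
    float dy = (y - h * 0.5f) / h;
    float dz = 1.0f;

    // Sphere at (0, 0, 3), radius 1: solve |t*d - c|^2 = r^2 for t.
    float cx = 0.0f, cy = 0.0f, cz = 3.0f, r = 1.0f;
    float a = dx * dx + dy * dy + dz * dz;
    float b = -2.0f * (dx * cx + dy * cy + dz * cz);
    float c = cx * cx + cy * cy + cz * cz - r * r;
    float disc = b * b - 4.0f * a * c;

    img[y * w + x] = disc >= 0.0f ? 255 : 0;   // white where the ray hits

}

int main() {
    const int w = 640, h = 480;
    unsigned char *d_img;
    cudaMalloc((void **)&d_img, w * h);

    dim3 block(16, 16), grid((w + 15) / 16, (h + 15) / 16);
    trace<<<grid, block>>>(d_img, w, h);

    unsigned char *img = new unsigned char[w * h];
    cudaMemcpy(img, d_img, w * h, cudaMemcpyDeviceToHost);
    std::printf("centre pixel: %d\n", img[(h / 2) * w + w / 2]);

    cudaFree(d_img);
    delete[] img;
    return 0;
}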