bit-tech.net

Rumour: Nvidia GT300 architecture revealed

How do you follow a GPU architecture such as Nvidia's original G80? Possibly by moving to a completely new MIMD GPU architecture.

Although Nvidia hasn’t done much to the design of its GPU architecture recently - other than adding some more stream processors and renaming some of its older GPUs - there’s little doubt that the original GeForce 8-series architecture was groundbreaking stuff. How do you follow up something like that? Well, according to the rumour mill, Nvidia has similarly radical ideas in store for its upcoming GT300 architecture.

Bright Side of News claims to have harvested “information confirmed from multiple sources” about the part, which looks as though it could be set to take on any threat posed by Intel’s forthcoming Larrabee graphics processor. Unlike today’s traditional GPUs, which are based on a SIMD (single instruction, multiple data) architecture, the site reports that GT300 will rely on “MIMD-similar functions” where “all the units work in MPMD mode”.

MIMD stands for multiple instruction, multiple data, and it's a technology often found in SMP systems and clusters. Meanwhile, MPMD stands for multiple program, multiple data. An MIMD system such as this would enable you to run an independent program on each of the GPU's parallel processors, rather than having the whole lot running the same program. Put simply, this could open up the possibilities of parallel computing on GPUs even further, particularly when it comes to GPGPU apps.
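To illustrate the distinction (this is our own sketch, not anything from the report): in today's CUDA model every thread of a given kernel runs the same program in SIMT fashion, and the closest a programmer can get to MPMD is to launch different kernels on separate streams and hope the hardware overlaps them, whereas a genuinely MIMD design could in principle run a different program on each cluster natively. A minimal CUDA sketch of that stream-based approximation, with made-up kernel names:

    // Two different kernels submitted on two streams -- a crude stand-in for
    // the MPMD idea of each cluster running its own program. Whether they
    // actually overlap depends entirely on the hardware and driver.
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void scale(float *data, float factor, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // same program, different data (SIMT)
        if (i < n) data[i] *= factor;
    }

    __global__ void offset(float *data, float delta, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] += delta;
    }

    int main() {
        const int n = 1 << 20;
        float *a, *b;
        cudaMalloc(&a, n * sizeof(float));
        cudaMalloc(&b, n * sizeof(float));
        cudaMemset(a, 0, n * sizeof(float));
        cudaMemset(b, 0, n * sizeof(float));

        cudaStream_t s1, s2;
        cudaStreamCreate(&s1);
        cudaStreamCreate(&s2);

        scale<<<(n + 255) / 256, 256, 0, s1>>>(a, 2.0f, n);   // program A
        offset<<<(n + 255) / 256, 256, 0, s2>>>(b, 1.0f, n);  // program B

        cudaDeviceSynchronize();
        cudaStreamDestroy(s1);
        cudaStreamDestroy(s2);
        cudaFree(a);
        cudaFree(b);
        printf("done\n");
        return 0;
    }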

Computing expert Greg Pfister, who's worked in parallel computing for 30 years, has written a good blog post about the differences between MIMD and SIMD architectures, which is well worth a read if you want to find out more. Pfister makes the case that a major difference between Intel's Larrabee and an Nvidia GPU running CUDA is that the former will use an MIMD architecture, while the latter uses a SIMD architecture. “Pure graphics processing isn't the end point of all of this,” says Pfister. He gives the example of game physics, saying “maybe my head just isn't built for SIMD; I don't understand how it can possibly work well [on SIMD]. But that may just be me.”
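To make Pfister's physics point a little more concrete: on a SIMD/SIMT design, threads in the same group that take different branches are executed one branch at a time with the other lanes masked off, which is exactly the behaviour that branch-heavy code such as per-object physics tends to trigger. A hypothetical CUDA sketch (our illustration, not Pfister's code):

    // Hypothetical kernel showing warp divergence. If object_type varies within
    // a warp, the branches below are serialised: the warp runs the "bouncy" path
    // with some lanes masked off, then the "sticky" path, then the default path,
    // leaving much of the SIMD width idle.
    __global__ void update_objects(float *velocity, const int *object_type, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;

        if (object_type[i] == 0) {
            velocity[i] *= -0.9f;   // bouncy: reflect with damping
        } else if (object_type[i] == 1) {
            velocity[i] = 0.0f;     // sticky: stop dead
        } else {
            velocity[i] *= 0.99f;   // default: air resistance only
        }
    }
    // Launched as, for example: update_objects<<<(n + 255) / 256, 256>>>(v, types, n);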

Pfister says there are pros and cons to both approaches. “For a given technology,” says Pfister, “SIMD always has the advantage in raw peak operations per second. After all, it mainly consists of as many adders, floating-point units, shaders, or what have you, as you can pack into a given area.” However, he adds that “engineers who have never programmed don’t understand why SIMD isn’t absolutely the cat’s pajamas.”

He points out that SIMD also has its problems. “There’s the problem of batching all those operations,” says Pfister. “If you really have only one ADD to do, on just two values, and you really have to do it before you do a batch (like, it’s testing for whether you should do the whole batch), then you’re slowed to the speed of one single unit. This is not good. Average speeds get really screwed up when you average with a zero. Also not good is the basic need to batch everything. My own experience in writing a ton of APL, a language where everything is a vector or matrix, is that a whole lot of APL code is written that is basically serial: One thing is done at a time.” As such, Pfister says that “Larrabee should have a big advantage in flexibility, and also familiarity. You can write code for it just like SMP code, in C++ or whatever your favorite language is.”
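Pfister's "one ADD" example maps directly onto GPU terms: a kernel that performs a single scalar add occupies one lane of one warp and still pays the full launch overhead, so the effective speed collapses to that of a single unit, while a batched kernel amortises the same overhead across millions of elements. A rough CUDA sketch of the contrast (ours, purely for illustration):

    #include <cuda_runtime.h>

    __global__ void add_one_pair(float *out, const float *x, const float *y) {
        *out = *x + *y;   // one thread, one add: the rest of the machine idles
    }

    __global__ void add_batched(float *out, const float *x, const float *y, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) out[i] = x[i] + y[i];   // thousands of adds in flight at once
    }

    int main() {
        const int n = 1 << 20;
        float *x, *y, *out;
        cudaMalloc(&x, n * sizeof(float));
        cudaMalloc(&y, n * sizeof(float));
        cudaMalloc(&out, n * sizeof(float));

        add_one_pair<<<1, 1>>>(out, x, y);                    // one lane of one warp busy
        add_batched<<<(n + 255) / 256, 256>>>(out, x, y, n);  // the whole chip busy

        cudaDeviceSynchronize();
        cudaFree(x); cudaFree(y); cudaFree(out);
        return 0;
    }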

Bright Side of News points out that this could potentially put the GPU's parallel processing units “almost on equal terms” with the “FPUs inside latest AMD and Intel CPUs”. In terms of numbers, the site claims that the top-end GT300 part will feature 16 groups that will each contain 32 parallel processing units, making for a total of 512. The site also claims that the GPU's scratch cache will be “much more granular”, which will enable a greater degree of “interactivity between the cores inside the cluster”.
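As a quick sanity check on those numbers: 16 groups of 32 units does indeed come to 512, and peak single-precision throughput would simply scale as units × operations per clock × clock speed. The clock figure and the assumption of one multiply-add per unit per clock in the sketch below are purely illustrative, since nothing has been announced:

    // Back-of-the-envelope host-side arithmetic only; the 1.5GHz shader clock
    // and 2 ops/clock (one fused multiply-add) are assumptions, not leaked figures.
    #include <cstdio>

    int main() {
        const int clusters        = 16;    // groups claimed by the report
        const int units_per_group = 32;    // parallel processing units per group
        const double clock_ghz    = 1.5;   // hypothetical shader clock
        const double ops_per_clk  = 2.0;   // assume one multiply-add per unit per clock

        const int total_units    = clusters * units_per_group;               // 512
        const double peak_gflops = total_units * ops_per_clk * clock_ghz;    // ~1,536 GFLOPS

        printf("%d units, roughly %.0f GFLOPS peak with the assumed clock\n",
               total_units, peak_gflops);
        return 0;
    }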

No information on clock speeds has been revealed yet, but if this is true, it looks as though Nvidia's forthcoming GT300 GPU will really offer something new to the GPU industry. Are you excited about the prospect of an MIMD-based GPU architecture with 512 parallel processing units, and could this help Nvidia to take on the threat from Intel's Larrabee graphics chip? Let us know your thoughts in the forums.

12 Comments

wuyanxu 23rd April 2009, 10:41 Quote
Interesting, but MIMD is a lot more complex to design, so unless they get this perfect, they'll suffer another FX series.
Flibblebot 23rd April 2009, 11:16 Quote
Absolutely: parallelism is very hard to code for, especially if you come from a non-parallel background. Unless nVidia have gone out and hired a bunch of parallel-knowledgeable coders, it may take some time for the drivers to become usable to any degree.

Also, how is the parallelism going to be handled by DirectX?


(Forum link missing in article, btw)
Turbotab 23rd April 2009, 11:27 Quote
It sounds exciting, big and expensive, but exciting. I only hope that game developers do not waste the potential of these cards by watering down their graphics engines to suit the consoles.
V3ctor 23rd April 2009, 12:21 Quote
Hope nVidia nails it this time... Don't get me wrong, I have a 4870, but nVidia can't just live on renamings and on bad choice architectures (GT200 is good, but too big)...

If nVidia fails this one, they will be in trouble... :s
Goty 23rd April 2009, 13:12 Quote
I've already got a CPU, I don't need another one!
Flibblebot 23rd April 2009, 13:31 Quote
But if the GPU is capable of managing "the world", including physics and anything else that affects that world, then the CPU could be freed for other tasks such as more realistic AI.
SuperNova 23rd April 2009, 15:02 Quote
Interesting approach, but I wonder if nVidia isn't focusing a bit too much on Larrabee... It will take some time for everything to be programmed to use all this power (just look at when dual core CPUs came out). To focus on this now might be a bit optimistic and could take focus from other, say pure game-oriented, parts. They usually work in parallel on new architectures, so I wonder how recently they changed their plans to match Larrabee (if they did).
I bet AMD is focusing a lot on that shared frame buffer for their GPUs. If AMD solves that and makes a great and cheap gaming card, they will steal market share and gain more money, making it easier to invest in future architectures when the software is closer to the market.

But it will be the (development of) software that decides which architecture to go for; unfortunately it's often a slow process. If AMD, nVidia and Intel all had MIMD as a main focus there wouldn't be much of a problem though, because it would be equal for all and speed up the development.
Goty 23rd April 2009, 15:47 Quote
Quote:
Originally Posted by Flibblebot
But if the GPU is capable of managing "the world", including physics and anything else that affects that world, then the CPU could be freed for other tasks such as more realistic AI.

More extraneous processing means less power for raw graphics, though.
Redbeaver 23rd April 2009, 20:05 Quote
Quote:
Originally Posted by V3ctor
Hope nVidia nails it this time... Don't get me wrong, I have a 4870, but nVidia can't just live on renamings and on bad choice architectures (GT200 is good, but too big)...

If nVidia fails this one, they will be in trouble... :s

funny, the way i look at it, ATI...err...AMD is going to be in trouble if they dont come up with something groundbreaking...... im not talkin bout "gamers" community, but overall PC-users.... theyre all on nvidia.... thx to the marketing budget.....

since radeon9800 family i havent seen a single ATI pr...er... AMD... product that can take a clear shot at nvidia's lineup.

in my perspective, nvidia can afford tinkering around with this technology. and larrabee isnt even a threat. so here's for their best success in (finally) getting some fresh stuff out of the oven... im bored.
thehippoz 23rd April 2009, 21:47 Quote
interesting read on that blog.. either way nvidia is going to be in trouble.. I know if I had a choice to put everything under one heatsink- that's the way I'd go- I'm afraid nvidia is gonna get owned in the long run..

but I am glad to see this changed the r&d from milking to- we gotta get it in gear.. might see some good stuff next year
sheninat0r 23rd April 2009, 22:27 Quote
Quote:
Originally Posted by Redbeaver
funny, the way i look at it, ATI...err...AMD is going to be in trouble if they dont come up with something groundbreaking...... im not talkin bout "gamers" community, but overall PC-users.... theyre all on nvidia.... thx to the marketing budget.....

since radeon9800 family i havent seen a single ATI pr...er... AMD... product that can take a clear shot at nvidia's lineup.

The HD 4850 and 4870 are definitely very competitive with nVidia's stuff.
j_jay4 23rd April 2009, 23:16 Quote
Quote:
Originally Posted by Redbeaver
funny, the way i look at it, ATI...err...AMD is going to be in trouble if they dont come up with something groundbreaking...... im not talkin bout "gamers" community, but overall PC-users.... theyre all on nvidia.... thx to the marketing budget.....

since radeon9800 family i havent seen a single ATI pr...er... AMD... product that can take a clear shot at nvidia's lineup..

I'm pretty sure the 4870 X2 had the performance crown for a while too