bit-tech.net

We've just witnessed the last days of large, single chip GPUs

Posted on 31st Mar 2010 at 11:15 by Richard Swinburne with 33 comments

Just as with the 65nm manufacturing process used for GT200, I'm certain Nvidia overestimated what the 40nm node would offer when it first designed Fermi, and that this miscalculation is a huge part of why the GeForce GTX 480 runs so hot and draws so much power.

We know each major architecture change for GPU development takes at least a few years to hash out, so fabless companies such as Nvidia need to guess where fabrication partners - TSMC in this case - will be.

As it stands, TSMC has had more than a rough year with its 40nm node and there's been considerable stress for both ATI and Nvidia - however, to ATI's advantage, it started on 40nm with the Radeon HD 4770. It clearly hasn't forgotten the lesson that came at the expense of the HD 2900 XT, which first arrived on a massive 80nm die before being respun onto TSMC's then-upcoming 55nm node at a more digestible price.

Luckily for Nvidia, the GTX 480 isn't quite an HD 2900 XT-level failure; at least it's faster than the previous generation. Even setting aside the lateness and the practical engineering issues, though, the die size and power use are truly massive.

The thing is, TSMC hasn't yet demonstrated how commercially viable its next fabrication node (likely 28nm) is. So while we fully expect Nvidia to 'pull an ATI HD 3000 series' in six months' time and re-do Fermi on a smaller process, resulting in a much more power-efficient GPU, TSMC's troubles - and the fact that Nvidia is desperate for a new process - mean Nvidia is likely looking to Global Foundries, the manufacturing firm spun off from AMD last year.

The heat and power consumption of the GTX 470 and 480 mean it's highly unlikely we'll see a dual-GPU product any time soon and, as much as the nay-sayers claim we cannot compare the "dual GPU" Radeon HD 5970 to the "single GPU" GTX 480, the fact is that these are all products competing for the title of "fastest graphics card" - and that's a title which isn't leaving AMD any time soon.

Nvidia overestimated the capacity of 40nm

While there are many reasons to be pessimistic about Fermi, one lesson Nvidia has learnt is to design Fermi with more modularity in mind, so we won’t have a repeat of the GT200 series, where Nvidia struggled to produce any derivatives for the all-important mainstream market.

We will be seeing more Fermi-based derivatives in the coming months, which Nvidia absolutely needs to nail because ATI has already successfully launched a complete top-to-bottom DirectX 11 range. After nine months of cold turkey, 2010 will show which Nvidia partners can take the strain and come out dancing.

A few of them might actually be better off with less competition: BFG has already left the European market along with many other smaller partners, and EVGA is set for a resurgence. Things could get interesting.

Looking to the future, I wonder if this will be the very last big GPU we ever see. After two big and hot GPUs, will Nvidia abandon this method of design in favour of the ATI-esque (or should we say, Voodoo-esque) route of multi-chip graphics card design? Dare I suggest, does it even care?

Are Nvidia's ambitions really to concentrate on CUDA applications in the HPC market, where big money is to be made? It's certainly a huge growth area compared to the relatively mature PC gaming market.

Like so many of us, I never want to see PC gaming die, but in my opinion the days of multi-billion transistor single chip graphics cards are practically over.

33 Comments

Fizzban 31st March 2010, 11:32 Quote
Cooler-running, less power-hungry GPUs are the way forward, at least as far as the mainstream consumer is concerned. The less power we use, the more money we save. Unless Nvidia can make a massive jump in performance, their cards are just pointless in most day-to-day rigs. Unless you fold or use CUDA-optimized programs it's just a waste of money x2.

Makes me chuckle... only Nvidia could make a 40nm-based card more power-hungry and hot than its predecessor.
_Metal_Guitar_ 31st March 2010, 11:47 Quote
"Like so many of us, I never want to see PC gaming die, but in my opinion the days of multi-billion transistor single chip graphics cards are practically over"

What has a change to multi chip GPUs got to do with PC gaming dying? If all they make are multi chip GPUs, wouldn't the support for them just get better?
Cyberpower-UK 31st March 2010, 11:55 Quote
As Crossfire and SLI have matured, the multi-GPU argument that ATI put forward with the launch of the 3870X2 is making more sense. High-end cards rarely come close to 100% scaling due to CPU and memory limitations, especially in non-overclocked systems, but lower-end cards in CF and SLI systems with an overclocked CPU can often challenge the 'top end single GPU' and cost less; for example, a pair of 5770s can compete with a 5870 and three nip at the heels of the 5970. Cards like the 4870X2 and GTX 295 have forced games developers to ensure their products can make good use of multi-GPU systems, which is beginning to erode outdated prejudices against SLI and CF in the same way as the prevalence of multi-threading in games slowly eroded the fast dual core vs quad core argument.
Bindibadgi 31st March 2010, 11:55 Quote
Quote:
Originally Posted by _Metal_Guitar_
"Like so many of us, I never want to see PC gaming die, but in my opinion the days of multi-billion transistor single chip graphics cards are practically over"

What has a change to multi chip GPUs got to do with PC gaming dying? If all they make are multi chip GPUs, wouldn't the support for them just get better?

More money has to be put into driver development and more money has to be put into graphics card design. With the cards stacked against PC gaming already in some respects, and each generation of graphics cards having less and less of a performance jump from the last - are we already hitting a wall?
Tyrmot 31st March 2010, 11:57 Quote
Pretty sure I remember people saying the same thing about the big G80 core when that came out too...
Xir 31st March 2010, 12:05 Quote
Haven't CPUs come back from putting two chips in one package and gone back to producing large, single chips again?
And why is it Global Foundries to Nvidia's rescue? (ATI'd rejoice I guess, GloFo is still full of "old" AMD people)
What node is GloFo at? 45nm? (Opterons mostly I guess)
Even Intel is just at 32nm for something as complex as a processor (which a GPU is... if not more complex)
Blademrk 31st March 2010, 13:29 Quote
Quote:
BFG has already left the European market
Really? That's a shame :( They were always the cards I looked for first.
wuyanxu 31st March 2010, 13:37 Quote
'Tis a shame, multi-GPU is way too dependent on drivers.

How about specifically designing a bus for multi-GPU? I like the 4870X2's side-bus (??) and I think that's the way forward for higher/better performance.
fingerbob69 31st March 2010, 13:39 Quote
Where is the impetus to keep designing faster and faster GPUs? Unless and until developers bring us games and applications that stretch and even surpass current GPU capabilities, all we will see, for quite a while, are Fermi-small steps in better graphics performance. There need to be at least a dozen, if not more, games like Crysis; games that make GPUs bleed, so that even a basic £100 card has to give a 5870 level of quality to keep games playable.

It is unfortunate that we are unlikely to see any of this until after a new generation of consoles comes out and raises the floor under games graphics. PC gaming isn't dying, it just no longer leads.
uz1_l0v3r 31st March 2010, 14:49 Quote
I don't understand how the disappointment of the Fermi core equates to the death of single-chip GPUs. Dual-GPU cards are total overkill in today's gaming market, the average gamer simply does not need one. Why would anyone in their right mind spend £4-500 on a dual-GPU graphics card, when single-GPU cards are perfectly adequate? I'm playing on a year-old GTX 275 and still cranking every game to the max.
l3v1ck 31st March 2010, 16:15 Quote
Quote:
We've just witnessed the last days of large, single chip GPUs
I hope not. I'd much rather have a single GPU than need to rely on drivers to get good SLI/CF performance.
rollo 31st March 2010, 16:35 Quote
The problem is, if Nvidia focuses on CUDA as most expect (which makes 50-60% of its profits according to reports)

then ATI will have no competition

and without it no real will to push the graphics boundary further.

Yes, I prefer a single GPU and always will, but I think, like the blog person, we have seen the last of them
l3v1ck 31st March 2010, 17:37 Quote
Question: could Microsoft build general multi-GPU support into the next API (DirectX 12)? That would be much better than needing specific game profiles in the driver.
D-Cyph3r 31st March 2010, 17:39 Quote
Quote:
Originally Posted by l3v1ck
I hope not. I'd much rather have a single GPU than need to rely on drivers to get good SLI/CF performance.

Or just use a single 5870 and get 85% of the performance....

Anyway, no, Nvidia won't learn from this because they still think Fermi is the best thing since sliced bread. Jen-Hsun Huang is borderline delusional in his own-brand fanboyism, hell he still thinks Nvidia "make the best chipsets in the world"....
technogiant 31st March 2010, 17:42 Quote
Quote:
Originally Posted by rollo
The problem is, if Nvidia focuses on CUDA as most expect (which makes 50-60% of its profits according to reports)

then ATI will have no competition

and without it no real will to push the graphics boundary further.

Yes, I prefer a single GPU and always will, but I think, like the blog person, we have seen the last of them

I think there is a change of focus coming; perhaps Nvidia is going to concentrate more on the HPC market, and this will drive development, with a derivative of the HPC product being used for gaming... very much as Fermi is.

Tbh perhaps that's the way it should be... it has always struck me as a little frivolous that "gaming" should be a major driving force in computer hardware development.

In fact it may even be advantageous: regardless of the ebb and flow of demand for PC gaming hardware, there will always be demand from the HPC sector... so provided there is sufficient demand to make a gaming derivative of an HPC product, PC gaming hardware will continue to develop.
Sloth 31st March 2010, 19:31 Quote
Quote:
Originally Posted by l3v1ck
Question: could Microsoft build general multi-GPU support into the next API (DirectX 12)? That would be much better than needing specific game profiles in the driver.
That would certainly be nice. When/if multi-GPU moves forward and becomes more popular, it would seem that there would be more support for it and therefore fewer issues. Look at 64-bit operating systems: running one used to cause some pretty big issues, but now that modern PCs are hitting RAM capacity limitations there is a huge push to use a 64-bit OS, they are quite standard, and people make new products to work with them. In much the same way, I assume game developers would start making games better suited to using dual GPUs, APIs would be changed, and drivers could be developed with the sole intent of supporting dual-GPU cards.
azrael- 31st March 2010, 21:07 Quote
I'm pretty sure Bindi, and others, are onto something. The days of huge monolithic GPUs are numbered. The graphics card industry is about to learn the same lesson the processor industry learned a few years back (mostly Intel with the P4): you can only go so far with (huge) single-core designs.

The future of GPUs clearly lies in smaller and more efficient multi-core designs, just as it does for CPUs. Yes, it'll take a paradigm shift and the learning curve for optimizing code for multi-core solutions might be a bit steep, but it's clearly the way ahead.
fingerbob69 31st March 2010, 21:37 Quote
Seriously folks... you gotta ask: why?

Graphics cards purely for gaming as their main raison d'être? Games are (with the honourable exception of Crysis) eaten alive by most mid+ cards of the last two years (think 4890/275 or higher). It is game developers that have to develop games that DRIVE users to want to upgrade... ATI/Nvidia are on a lost cause if they think people will continually upgrade just to have the latest card if their existing card remains more than adequate. Nvidia have bought games to show off PhysX. Wasted effort! ATI (and Nvidia) should be paying them to make games only properly playable on the best of the last-gen cards, so people buy the next gen and the next gen is worth developing.
tad2008 31st March 2010, 21:49 Quote
Quote:
Originally Posted by Sloth
Question: could Microsoft build general multi-GPU support into the next API (DirectX 12)? That would be much better than needing specific game profiles in the driver.

Some of the issues for games and drivers come down to the way the drivers are coded; the rest lies in the hands of the developers and the code they write. Far too many people are too keen to point to bad drivers when a lot of the time the problem lies within the software itself, which is why on the PC platform we have the need for patches, something the consoles don't get because they spend more time testing for problems.

It is a misconception that the sheer variety of hardware on the PC platform is to blame; in essence this simply comes down to drivers and the software that uses them. For a simplified understanding, consider Adobe's Flash and the fact that it is capable of running on any PC regardless of hardware.
Quote:
Originally Posted by azrael
I'm pretty sure Bindi, and others, are onto something. The days of huge monolithic GPUs are numbered. The graphics card industry is about to learn the same lesson the processor industry learned a few years back (mostly Intel with the P4): you can only go so far with (huge) single-core designs.

The future of GPUs clearly lies in smaller and more efficient multi-core designs, just as it does for CPUs. Yes, it'll take a paradigm shift and the learning curve for optimizing code for multi-core solutions might be a bit steep, but it's clearly the way ahead.

I do agree that multi-core GPUs are going to be the way forward, though ATI have shown that there is still some life left in single-core GPUs for a little while longer yet.
metarinka 31st March 2010, 22:59 Quote
By multi-GPU do we mean multiple discrete GPUs, such as any X2 type of card? Or multiple discrete cores on a single die?

Graphics processing is already an extremely parallelizable task. That's why GPUs have hundreds of stream processors and shader units. Those channels already act as a "multi-GPU" solution with shared resources like memory links and the like; hence any modern GPU is already the equivalent of a 100+ core processor (mind you, with a specific instruction set and shared memory).

Correct me if I'm wrong, but what is the benefit of going to multiple discrete dies instead of using one die with twice the transistors? Unless we are talking about heat and yields?
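
On the heat-and-yields point, a simple defect model is enough to show why two mid-sized dies can be cheaper to make than one giant one: yield falls off exponentially with die area, and a defect on a big die writes off twice as much wafer. A back-of-the-envelope sketch in Python, with a made-up defect density, a rough usable wafer area and an approximate GF100-class die size (the real foundry economics are far more involved than this):

# Rough, illustrative die-yield sketch (simple Poisson defect model, made-up numbers):
# yield is roughly exp(-defect_density * die_area), so it drops fast as dies get bigger,
# and a defect on a big die wastes twice as much silicon as one on a half-size die.
import math

DEFECTS_PER_MM2 = 0.002    # assumed defect density, purely illustrative
WAFER_AREA_MM2 = 55_000    # usable area of a ~300mm wafer, ignoring edge losses

def good_dies_per_wafer(die_area_mm2):
    candidates = WAFER_AREA_MM2 / die_area_mm2
    yield_rate = math.exp(-DEFECTS_PER_MM2 * die_area_mm2)
    return candidates * yield_rate

BIG_DIE = 530.0            # roughly a GF100-class die area in mm^2
SMALL_DIE = BIG_DIE / 2    # a hypothetical half-size chip

big_cards = good_dies_per_wafer(BIG_DIE)         # one good big die per card
dual_cards = good_dies_per_wafer(SMALL_DIE) / 2  # two good small dies per card

print(f"big single-die cards per wafer: {big_cards:.0f}")   # ~36
print(f"dual small-die cards per wafer: {dual_cards:.0f}")  # ~61

Under those made-up numbers the two-chip card comes out well ahead per wafer, which is the economics behind the X2-style designs discussed in this thread.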
dec 1st April 2010, 01:11 Quote
Quote:
Originally Posted by metarinka
By multi-GPU do we mean multiple discrete GPUs, such as any X2 type of card? Or multiple discrete cores on a single die?

Graphics processing is already an extremely parallelizable task. That's why GPUs have hundreds of stream processors and shader units. Those channels already act as a "multi-GPU" solution with shared resources like memory links and the like; hence any modern GPU is already the equivalent of a 100+ core processor (mind you, with a specific instruction set and shared memory).

Correct me if I'm wrong, but what is the benefit of going to multiple discrete dies instead of using one die with twice the transistors? Unless we are talking about heat and yields?

Multi-GPU = 4870X2, 3870X2, etc.

Multiple discrete dies let pretty much everyone in the manufacturing process save a bit of time and money, as it's easier doing it once and slapping two of them onto a PCB instead of doing two totally different GPUs. Although it's possible to just do this - use Cypress as an example.

Currently it's this:
5970 = 2 5870s on the same PCB
5870 = 20 processor clusters (right name) enabled
5850 = 18 clusters enabled.
5830 = 16 or something.

And so on. And they're all the same GPU with different parts disabled (except the 5970). If they wanted to they could call a 5870 a 5970, a 5850 a 5870, and so on, and achieve a similar result to what you were talking about. But the reason why they don't take the transistor count and shader count from the 5970 and force it into a 5870 die is because the stupid thing would pull a GTX 480/470 (cook itself and black out the whole of New York).

It seems like ATI waits for a die shrink before doing that, since the 5870 matches the 4870X2 for shader count but runs cooler and draws less power.

Now on to the topic of the blog.

Big GPUs won't be going anywhere for a while. As long as there are people, stuff like Fermi will keep happening, simply because everyone will be like "I hope that new process can keep this thing from flopping" when the process is a few years off to begin with. Still, there will be big single GPUs to power the PlayStations and Xbox 360s of the future. However, a demand for cool and light-bill friendly GPUs will become necessary, especially when Flash/internet browsing becomes GPU-accelerated.

Fermi + 28nm > 5970?
Elton 1st April 2010, 02:38 Quote
I think the large GPU already died with the 8800 Ultra, and even more so with the GTX 280.

Not only are the processes getting increasingly smaller, but the industry is stagnating in terms of graphical prowess in games. Of course, I think many already foresaw that GPUs could only get so big before there was no more room, and in this case there isn't.

Think of the CPUs now, 2 cores on 1 die..
iwod 1st April 2010, 04:06 Quote
Fermi has other problems as well. Between now and 28nm, a few Fermi respins will fix the yield and power/heat issues, or at least improvements will be made. Fermi has to move to a 256-bit memory controller, because most games don't cope well with non-standard memory size configs; 384-bit only buys higher bandwidth at the cost of wasted memory (unless Nvidia can push the industry to support these memory sizes). While it would be great to see a 512-bit GDDR5 controller, there seem to be yield and die space issues with it. 256-bit GDDR5 at 5GHz would not be enough for Fermi, and 7GHz or higher isn't widely available yet. And if Nvidia could make Fermi run at higher frequencies, it would definitely hit the bandwidth wall.
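
To put rough numbers on the bus-width point above: memory bandwidth is just bus width (in bytes) multiplied by the effective data rate, so the trade-offs in that post can be sketched in a few lines of Python (the clock figures are the commonly quoted ones for these cards; the 5GT/s and 7GT/s cases just follow the post):

# bandwidth (GB/s) = (bus width in bits / 8) * effective data rate (GT/s)
def bandwidth_gb_s(bus_width_bits, data_rate_gt_s):
    return bus_width_bits / 8 * data_rate_gt_s

print(bandwidth_gb_s(384, 3.7))  # ~177 GB/s - GTX 480's 384-bit GDDR5
print(bandwidth_gb_s(256, 4.8))  # ~154 GB/s - HD 5870's 256-bit GDDR5
print(bandwidth_gb_s(256, 5.0))  # 160 GB/s  - a 256-bit Fermi at 5GT/s would be a step backwards
print(bandwidth_gb_s(256, 7.0))  # 224 GB/s  - what 7GT/s memory would buy on a 256-bit bus

The odd memory sizes follow from the bus width too: twelve 32-bit channels pair naturally with 1.5GB rather than the usual 1GB or 2GB.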
Xir 1st April 2010, 11:22 Quote
All current processors are multi core...but on one die ;-)
wuyanxu 1st April 2010, 11:54 Quote
People have got to stop comparing multi-GPU to multi-core CPUs. GPUs are already multi-core; multi-CPU server platforms, such as a four-processor setup, are what's comparable to multi-GPU.

Multi-core is where a number of cores share the same cache, with the same memory controller.

Multi-processor is where there are a number of memory controllers, each with its own memory; nothing is shared except the IO of the system.

Due to the shared nature of multi-core, current single GPUs can have very efficient scheduling and data can be exchanged via the very fast cache. But with multi-GPU, data must travel through a form of bus connecting the chips, thus creating a bottleneck.
rollo 1st April 2010, 12:21 Quote
This assumes a console war continues; Sony and Microsoft have both not committed to the next-gen console.

Natal and the PlayStation Eye, I think it's called, are their next big ideas.

Sony said they have a 10-year product life cycle - does that mean it's 10 years before the PS4? Nvidia and Sony have a decent relationship, so you would assume they would stick together.

Graphics can't go that much higher before they are lifelike; there's only so much detail you can add before the game hits reality.

Crysis is pretty close to it already. Characters can already show emotion.

3D is the next big thing, but I don't think 3D is for the PC market. Most people are still on a 21-inch screen, if not below. How much further can graphics truly be pushed? Wipeout HD is still the best-looking game out there, with God of War 3 pretty close behind,

both running on a 4-5 year old GPU.

No game bar Crysis really requires the 5850 and above unless you go into the crazy resolutions, which a bit-tech review showed very few use.

Most people are still at the 1680 mark or even 1280x1024. Till everyone is at 1080p/i I doubt graphics hardware will ever be pushed.
dec 1st April 2010, 12:54 Quote
Quote:
Originally Posted by rollo
This assumes a console war continues; Sony and Microsoft have both not committed to the next-gen console.

Natal and the PlayStation Eye, I think it's called, are their next big ideas.

Sony said they have a 10-year product life cycle - does that mean it's 10 years before the PS4? Nvidia and Sony have a decent relationship, so you would assume they would stick together.

Graphics can't go that much higher before they are lifelike; there's only so much detail you can add before the game hits reality.

Crysis is pretty close to it already. Characters can already show emotion.

3D is the next big thing, but I don't think 3D is for the PC market. Most people are still on a 21-inch screen, if not below. How much further can graphics truly be pushed? Wipeout HD is still the best-looking game out there, with God of War 3 pretty close behind,

both running on a 4-5 year old GPU.

No game bar Crysis really requires the 5850 and above unless you go into the crazy resolutions, which a bit-tech review showed very few use.

Most people are still at the 1680 mark or even 1280x1024. Till everyone is at 1080p/i I doubt graphics hardware will ever be pushed.

The PS2 wasn't 10 years before the PS3.

+1 to Natal.

If OLEDs ever get around to being sold like LCDs, it just may be possible to have a whole wall as your monitor. That oughta keep GPUs busy. Who needs Eyefinity? I got a wall!
dogknees 2nd April 2010, 05:24 Quote
Many of you seem to be saying that games have reached their ultimate potential. That there is no possible room for improvement and no new paradigms to be explored.

We haven't even scratched the surface of the possibilities! This is some of what I see in the future of gaming.

Game physics is still incredibly immature. Simulation of liquids and gases and their interaction with the rest of the world is immature or non-existent. Where it exists, the methods are not "physics" based, but done with a few formulae that capture the general behaviour, without much of the subtlety of the real world. Particle systems are still using fairly coarse approximations to model gases, and the simulation of chemical behaviours is only just appearing. Think about simulating game physics at the atomic level...

Modelling is still at the level feature films were a decade or more ago. They are using models with billions of polygons. Gaming models are a joke in comparison. The use of tessellation in the new generation chips is a great step, but only one more step along a very long path. The sort of model detail seen in Avatar is where gaming will hopefully be in the medium term.

Game AI is also in its infancy, even compared to the current state of the art, but research in AI in general is accelerating at least as fast as the rest of the information industry, and game designers will keep racing to use whatever the researchers come up with. We will see smarter, more subtle opponents and allies. They'll have more detailed memories and intelligent behaviours, both individually and cooperatively, and the ability to come up with novel solutions to problems.

I can picture something like a simple field in spring in a future game. You'd be seeing literally billions of individually animated blades of grass, leaves, flowers and so on. All of them would move independently in the breeze as they do in the real world. There would be dew on the grass, and if you looked closely, all of it would be refractive. The soil the plants are growing in would consist of separate grains of sand, bits of organic matter and small stones. Your footprints in the soil and the crushed bits of grass would be the same, with the blades of grass slowly straightening up after you pass. The dew would be making your trouser legs damp, causing them to cling to your legs a bit as you walked. The field would also be populated with the variety of insects and other small animals you would see in a real field. These would all act like their real cousins, eating the grass or each other.

That's just what I came up with in 5 minutes, without even considering actors, their behaviour and their interactions with the environment, each other and the player(s). I'm sure the designers and other creative types that build our games could add far more.

We are at the dawn of gaming and virtual environments; compare Crysis with Wolfenstein and extrapolate a couple of decades with the exponential increases we've seen so far in all aspects of gaming. It's also likely that entirely new kinds of game will be developed given the possibilities that vast increases in processing power will make available. After all, before Wolfenstein, there were essentially no FPSs as we know them today. The power of the '386 generation made something entirely new possible.

1080 HD is the standard right now, but future standards are already being developed in the labs with 5-10 times the resolution we have now. Think about a wrap around screen about twice the size of a 30", but with resolution like a photo, maybe a 0.025 mm dot pitch or less.

We'll also need two frames for whatever type of 3D is used! That was the future only a couple of years ago and now lots of new products are in 3D, media and hardware. It'll be standard in another couple.

Big hi-res screens/3D displays, models with billions or trillions of polygons, and accurate simulation of reflection, refraction, diffraction, diffusion, radiosity, and scattering of light will need enormous amounts of processing power from future generations of GPUs. Other advances will need physics processors, perhaps AI processors, and more.

Future developments in fabrication and design will allow more gates to be built on the same area of a chip. Moving to new materials and technologies will provide more grunt still.

And, still on the far horizon, but definitely out there, is quantum computing of various sorts. Some of the concepts are pretty esoteric, but there are others, based on single-electron gates and circuits (which use single electrons to represent a bit) and other physical phenomena like spin, that are much closer to being realised.

Will this ultimate processor be a single chip/object? Perhaps not, and certainly there will be generations where multiple chips are used, probably LOTS of chips. But, I think there will be many single chip solutions in our future.

One reason is that it's always going to be faster to send a signal across a single chip than off one, across a wire/PCB trace/optical fibre, and back into another chip. As speeds increase, distance becomes a serious issue. The smaller your processor, the faster it can work.

Scientists and engineers have been developing the idea of truly 3D processor/chip structures, where you stack up many layers of circuitry vertically, for years, and some chips now use multi-level structures where several transistors are stacked on top of one another. Potentially there could be as many layers as there are gates across the width of a chip. So, tens of thousands of layers might be possible. I'd call this a single GPU.

We will never stop inventing and developing newer smarter technology and applying it to entertainment. Gaming is, and I believe will remain, one of the primary motivating factors of this progress. Maybe just behind sex!

People love to play!
azrael- 2nd April 2010, 09:16 Quote
Quote:
Originally Posted by dogknees
Many of you seem to be saying that games have reached their ultimate potential. That there is no possible room for improvement and no new paradigms to be explored.

<snip>
No, that's not what we're saying at all. Quite the contrary. What we're (or at least I'm) saying is that game development has shifted away from the PC to console development.

Consoles have fixed graphical capabilities and usually have a longer life span than your average PC hardware. This is a problem, because when developing for consoles with their limited capabilities there's no incentive to invest time and effort in pushing the envelope for PC hardware.

Ergo, as things stand, you won't need state-of-the-art graphics hardware on the PC to play a simple console port. Which is what most games these days are.
kornedbeefy 2nd April 2010, 12:58 Quote
Quote:
Originally Posted by azrael-
Quote:
Originally Posted by dogknees
Many of you seem to be saying that games have reached their ultimate potential. That there is no possible room for improvement and no new paradigms to be explored.

<snip>
No, that's not what we're saying at all. Quite the contrary. What we're (or at least I'm) saying is that game development has shifted away from the PC to console development.

Consoles have fixed graphical capabilities and usually have a longer life span than your average PC hardware. This is a problem, because when developing for consoles with their limited capabilities there's no incentive to invest time and effort in pushing the envelope for PC hardware.

Ergo, as things stand, you won't need state-of-the-art graphics hardware on the PC to play a simple console port. Which is what most games these days are.

Maybe ATI and/or Nvidia need to promote PC gaming? How about some commercials on TV? PC gaming could use a champion; it only has its fans.

Eventually some devs are going to wake up and tap into top-of-the-line PC hardware. The console market is getting oversaturated IMHO. They will have to advance game physics and graphics if they want to survive, and the PC will be their vehicle.
Farfalho 2nd April 2010, 20:02 Quote
Quote:
Originally Posted by dec


Currently it's this:
5970 = 2 5870s on the same PCB
5870 = 20 processor clusters (right name) enabled
5850 = 18 clusters enabled.
5830 = 16 or something.

snip

As a matter of fact, 5970 = 2 5850s, not 2 5870s. That, probably, will be a 5980 or 5990 if they get their two-GPU graphics under the 5900 label. We're yet to see an ATi 2x5870.
shaffaaf27 4th April 2010, 20:49 Quote
Quote:
Originally Posted by Farfalho
Quote:
Originally Posted by dec


Currently it's this:
5970 = 2 5870s on the same PCB
5870 = 20 processor clusters (right name) enabled
5850 = 18 clusters enabled.
5830 = 16 or something.

snip

As matter of fact, 5970 = 2 5850's, not 2 5870. That, probably, will be a 5980 or 5990 if they get their 2 core graphics under the 5900 label. We're yet to see an ATi 2x5870.
No, the performance is that of 2 5850s, but the GPUs are 2 5870s, both with 1600 SPs, whereas the 5850s have 1440 SPs.
LightningPete 6th April 2010, 23:20 Quote
I think the software and hardware developers need to work more closely together. I mean, how can a massively specced ATI 5000 series and Nvidia 400 series card not play Crysis and ARMA2 without critical frame rate drops in detailed and action-packed areas respectively? I think there is something wrong with how GPU drivers work with software developers' coding and methods... Time for improved communication; perhaps install a fibre optic phone line between Nvidia and EA, for example?