bit-tech.net

Rumour: AMD Radeon HD 7000 supports PCI-E 3

Will more bandwidth mean more speed? We'll have to wait a few months to find out.

Turkish tech site Donanim Haber claims to have received some information about the forthcoming Radeon HD 7000 series of GPUs, including news that the chips will support the PCI-E 3 standard.

The inclusion of PCI-E 3 support isn’t a huge surprise – MSI has already announced that its Z68A-GD80 (G3) motherboard will support the standard, and Intel’s Ivy Bridge chipset is rumoured to incorporate PCI-E 3 support too.

The PCI-E 3 spec was finalised in November 2010, and it essentially doubles the bandwidth per lane, while remaining backwards compatible with previous generation PCI-E connections.
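For reference, the per-lane numbers behind that doubling work out roughly as follows: the raw signalling rate only rises from 5GT/s to 8GT/s, but PCI-E 3 also swaps the old 8b/10b encoding for the far leaner 128b/130b scheme. The snippet below is only an illustrative back-of-the-envelope sketch of those published figures.

    # Approximate per-lane PCI-E throughput in one direction (illustrative sketch)
    specs = {
        "PCI-E 1.x": (2.5e9, 8 / 10),     # 2.5GT/s, 8b/10b encoding
        "PCI-E 2.0": (5.0e9, 8 / 10),     # 5GT/s, 8b/10b encoding
        "PCI-E 3.0": (8.0e9, 128 / 130),  # 8GT/s, 128b/130b encoding
    }
    for gen, (transfer_rate, efficiency) in specs.items():
        mb_per_s = transfer_rate * efficiency / 8 / 1e6  # bits -> megabytes per second
        print(f"{gen}: ~{mb_per_s:.0f}MB/s per lane")
    # Roughly 250, 500 and 985MB/s per lane respectively - close enough to a doubling.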

However, the new spec doesn’t increase the power ceiling, instead just consolidating the existing 150W and 300W standards, despite the fact that some of today's graphics cards already break the 300W limit.

We expect to see the next generation of graphics cards this autumn, so we’ll have to wait a few months to find out what difference is made by PCI-E 3 support.

Do you think the extra bandwidth will net you any extra performance? Is it wise to keep the 300W power limit, or is this an oversight? Let us know your thoughts in the forums.

46 Comments

bowman 29th July 2011, 13:43 Quote
Um.

This is like platter-based hard drives supporting SATA 3.

Neat, but pointless.
misterd77 29th July 2011, 13:45 Quote
500 watt gpu's ?
Tattysnuc 29th July 2011, 13:58 Quote
With the way that power supplies have evolved, I don't believe the motherboard will ever be able to power the top GPU models. There must be all sorts of issues with running such high power over printed circuit board tracks - how much can you run before you get problems with arcing, pin burnout etc? Makes sense to keep them isolated and use the 80/20 rule. Surely 80% of GPUs around the world are covered by this power envelope, especially bearing in mind that most boards now have an "APU", to misuse AMD's acronym...
TAG 29th July 2011, 14:03 Quote
They're skipping 32nm and going straight to 28nm.
Next gen GPUs will hopefully use a lot less power.
Goty 29th July 2011, 14:23 Quote
Quote:
Originally Posted by TAG
They're skipping 32nm and going straight to 28nm.
Next gen GPUs will hopefully use a lot less power.

Why use less power when you can increase performance? ;)
BrightCandle 29th July 2011, 14:26 Quote
300W is not an unreasonable maximum for a GPU. There comes a point where it's impossible to cool even with the additional slot they currently use. Noise on the 300W cards is higher than is reasonable.
Evildead666 29th July 2011, 15:31 Quote
300W is probably the max that should be allowed for Graphics cards anyway.
PCIe 3 will mean better multi-card support, and higher upload/download bandwidth for those cards that may speak directly to the CPU.
azazel1024 29th July 2011, 15:41 Quote
I certainly don't think this is pointless. For actual graphics card performance I don't think it'll do a lick of good.

HOWEVER, for the bus itself and small add-on cards it'll be enormous. A lot of current SATA3 and USB3 add-on cards are limited by being connected through a PCI-E 2 x1 lane. Bump that to PCI-E 3 and you have enough bandwidth to saturate SATA3 and USB3. You're also now able to get 4-port GbE NICs on a single lane, and 10GbE NICs on a single lane. Go with Intel's suggested/recommended new x2 lane setup and you have the equivalent of a full PCI-E 2 x4 slot for things like 4-port RAID cards, etc. You could also start implementing new graphics card standards that run off a lowly x4 PCI-E port if you wanted to (most lower-end cards currently don't even really saturate an x8 PCI-E slot).

You also have twice the total bandwidth on the bus. Especially useful for entry level Intel CPU/chipsets that only support x16 lanes of PCI-e. You use most of that with a single high end discrete GPU. Obviously you aren't going to be using much of that bandwidth for other things when you are gaming, but think of a couple high end discrete GPUs in crossfire, combined with a GbE NIC (or worse a 10GbE NIC) passing large amounts of info and a x4 RAID card making a backup over the network. You are going to be mashing a god awful amount of data through a "restrictive" bus.
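Those single-lane figures roughly check out. Assuming around 500MB/s for a PCI-E 2 lane, around 985MB/s for a PCI-E 3 lane, and raw interface rates for the devices, here is a purely illustrative sketch:

    # Can a single lane feed a typical add-on card? Approximate figures, one direction.
    lanes_mb_s = {"PCI-E 2.0 x1": 500, "PCI-E 3.0 x1": 985}
    devices_mb_s = {"SATA 6Gb/s port": 600, "USB 3.0 port": 500, "4-port GbE NIC": 500}
    for lane, lane_bw in lanes_mb_s.items():
        for device, demand in devices_mb_s.items():
            verdict = "enough" if lane_bw >= demand else "bottleneck"
            print(f"{lane} -> {device}: {verdict} ({lane_bw} vs ~{demand}MB/s)")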
Hakuren 29th July 2011, 15:49 Quote
PCI-Ex 3.0 - yes, it is useful now, but certainly not for VGAs. Not one currently available graphics card can saturate an x16 slot. Today's VGAs are barely capable of saturating an x8 slot (and that only for top-of-the-line products). So a Gen 3 x16 slot is way too soon for graphics.

Storage is another matter. It is pretty much the only major branch of the IT industry which will benefit from the extra bandwidth introduced with Gen 3. It will simplify manufacturing by reducing the number of x16 slots; an x8 slot will provide as much bandwidth. And for entry-level servers and workstations I can see a move back to the x4 slot, which will provide the same amount of bandwidth as Gen 2 x8.

The 300W limit. Hmm, that is a tough one. I would gladly see the limit reduced. There is no hope in hell of seeing any major breakthrough in the way the modern PC is built. Of course, like everybody, I would love to see a CPU with the power of 30x an i7, or a VGA with GTX580 performance multiplied by 30, both draining 5W. But that won't happen for the next 50 years or so. And at that point I won't care too much about PCs! :D Anyway, I think the tendency should be towards bigger PSUs feeding power directly, instead of stressing motherboard circuitry.
schmidtbag 29th July 2011, 15:52 Quote
Quote:
Originally Posted by bowman
Um.

This is like platter-based hard drives supporting SATA 3.

Neat, but pointless.

I completely agree. If there are any devices that actually use 100% of the PCI-E 2 bandwidth, there are probably fewer than three such products altogether. I would be interested to see how today's top-end GPUs and SSDs perform on PCI-E 3, but I'm sure it's probably going to be a 1FPS difference.

As for PCI-E 3 not upping the wattage - good. Both AMD and Nvidia should not have made cards that exceed that limit. I think it's stupid that they're making these behemoth cards that are so big they can't fit in the average case, spew so much heat that you need to water-cool them, probably consume more power than all of your other electronic devices combined, and in games give you far more FPS than it's physically possible to notice.

The power limitation on PCI-E helps keep products practical.
the_kille4 29th July 2011, 16:51 Quote
I would love to see an affordable way of having SSDs in PCI-E slots... because the current ones are usually meant for servers, and using the extra bandwidth can definitely help their performance
edzieba 29th July 2011, 17:29 Quote
Question: Is 300W the maximum bus power draw (i.e. how much you can draw via the card edge connector before needing additional power connectors), or the rated total power draw (i.e. if you draw more than this amount of power, from all sources to a single card, that card cannot be certified as "PCI-E" compliant)?
The former seems unlikely, but is the latter really an issue? Other than not being able to write PCI-E anywhere on your box, documentation, website, etc. (I'm sure some creative workarounds could be found), is there something else that prevents manufacturers from breaking this part of the spec? Some sort of patent agreement whereby non-compliance with any part opens you up to being sued for even using something that is partially compatible with PCI-E?
borandi 29th July 2011, 17:52 Quote
Quote:
Originally Posted by mingemuncher
500 watt gpu's ?

There were two 600W GPUs on show at Computex, one of which was definitely being put into production.
Psy-UK 29th July 2011, 18:32 Quote
Glad they've kept the power limit. Stops things getting silly and out of hand.
HourBeforeDawn 29th July 2011, 18:39 Quote
Sure, it won't use what's offered in PCI-E 3.0, but it will help pave the way for other devices that will.
law99 29th July 2011, 19:28 Quote
It would be interesting if they just threw power consumption to the wind for a single product. But otherwise I'm pleased they want to keep sensible limits.
Action_Parsnip 29th July 2011, 19:37 Quote
Quote:
Originally Posted by TAG
They're skipping 32nm and going straight to 28nm.
Next gen GPUs will hopefully use a lot less power.

EVERY OTHER process change ever says you're wrong.
TAG 29th July 2011, 19:41 Quote
Quote:
Originally Posted by Action_Parsnip
EVERY OTHER process change ever says you're wrong.

40nm saw vapor chambers becoming the norm.
They pretty much ran out of better ways to cool smaller surfaces.

I'm really curious as to what's gonna happen next. What will be used to increase cooling power? Artificially make chips larger by compartmentalising them into blocks deliberately spread apart from each other?
TAG 29th July 2011, 19:49 Quote
Quote:
Originally Posted by Action_Parsnip
EVERY OTHER process change ever says you're wrong.


Also, look at laptops.
They got more powerful AND use less power.
Goty 29th July 2011, 19:58 Quote
Quote:
Originally Posted by TAG
Quote:
Originally Posted by Action_Parsnip
I'm really curious as to what's gonna happen next. What will be used to increase cooling power? Artificially make chips larger by compartmentalising them into blocks deliberately spread apart from each other?

That wouldn't really do any good since the density of transistors wouldn't decrease.
Sloth 29th July 2011, 20:02 Quote
Someone correct me if I'm wrong, but in situations where certain chipsets can only support a set number of PCI-E lanes wouldn't this allow motherboard manufacturers to setup boards for SLI/CF using two x8 PCI-E 3 slots and still get the same performance as using two x16 PCI-E 2 slots? If so, seems like a pretty good upgrade even if current cards can't saturate 16 lanes.
TAG 29th July 2011, 20:06 Quote
Quote:
Originally Posted by Goty


That wouldn't really do any good since the density of transistors wouldn't decrease.

Then how is crossfire/SLI working?
I'm not talking about spreading transistors individually, but spreading blocks of them - on the scale of maybe cutting a chip into 4 and spreading it over an area twice its size.
thehippoz 29th July 2011, 20:28 Quote
Quote:
Originally Posted by Sloth
Someone correct me if I'm wrong, but in situations where certain chipsets can only support a set number of PCI-E lanes wouldn't this allow motherboard manufacturers to setup boards for SLI/CF using two x8 PCI-E 3 slots and still get the same performance as using two x16 PCI-E 2 slots? If so, seems like a pretty good upgrade even if current cards can't saturate 16 lanes.

yeah not shabby.. double the bandwidth and still within spec of current power supplies
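Using the same rough per-lane figures as above, the x8-for-x16 trade does come out about even - again, purely an illustrative sketch:

    # Rough check: PCI-E 3 x8 versus PCI-E 2 x16, one direction
    pcie2_lane_mb_s, pcie3_lane_mb_s = 500, 985  # approximate per-lane throughput
    print("PCI-E 2.0 x16:", 16 * pcie2_lane_mb_s, "MB/s")  # 8000MB/s
    print("PCI-E 3.0 x8: ", 8 * pcie3_lane_mb_s, "MB/s")   # 7880MB/s - near enough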
Action_Parsnip 29th July 2011, 23:46 Quote
Quote:
Originally Posted by TAG
40nm saw vapor chambers becoming the norm.
They pretty much ran out of better ways to cool smaller surfaces.

I was quoting this: "They're skipping 32nm and going straight to 28nm.
Next gen GPUs will hopefully use a lot less power." A new process has never really meant a trend of falling power consumption for GPUs.
Quote:
I'm really curious as to what's gonna happen next. What will be used to increase cooling power? Artificially make chips larger by compartmentalising them into blocks deliberately spread apart from each other?

I never wrote this, this is someone else's quote.
Quote:
Originally Posted by TAG
Also, look at laptops.
They got more powerful AND use less power.

I was quoting this: "They're skipping 32nm and going straight to 28nm.
Next gen GPUs will hopefully use a lot less power." A new process has never really meant a trend of falling power consumption for GPUs.
Wwhat 30th July 2011, 03:00 Quote
If you try to squeeze more power through a PCIE connector you'd need to use more pins, breaking compatibility unless you added a secondary edge connector at the rear of the PCIE slot. The motherboard would also get too hot unless you used multiple traces, and seeing as it would be undoable to run those clear across the motherboard, you'd need power connectors near the slots - at which point it seems much simpler to just use the current system of having the connectors on the graphics cards.
The placement and design of those is an open discussion though, I'd say.
fluxtatic 30th July 2011, 07:26 Quote
Quote:
Originally Posted by edzieba
Question: Is 300W the maximum bus power draw (i.e. how much you can draw via the card edge connector before needing additional power connectors), or the rated total power draw (i.e. if you draw more than this amount of power, from all sources to a single card, that card cannot be certified as "PCI-E" compliant)?
The former seems unlikely, but is the latter really an issue? Other than not being able to write PCI-E anywhere on your box, documentation, website, etc. (I'm sure some creative workarounds could be found), is there something else that prevents manufacturers from breaking this part of the spec? Some sort of patent agreement whereby non-compliance with any part opens you up to being sued for even using something that is partially compatible with PCI-E?

300W is the total power draw. The max allowed by the spec for the PCI-E connector itself is 75W. Which follows what Tattysnuc already said - so the limit is likely a lot lower than he realized. And I would take a guess that that is the limitation - you can't suck too much power through traces on a board before things start catching on fire ;)

Does it make sense? I think so. After replacing my water heater with a much more efficient model, I felt justified in getting a video card that takes a single 6-pin connector. That is, power draw is one of the things that limits me in what hardware I get, to some degree. It doesn't make sense to me to run a machine drawing nearly a kilowatt from the wall (extreme example, but they do exist) just because I want my shadows to be extra shadowy and my explosions extra explodey when I'm gaming. I'm not passing judgment on that, though - if that's what you dig, go for it. For me, though, give me a card with the power draw of the 5770 with the capabilities of the GTX570 and that would be all I need for a good long while. Not that it's likely to happen anytime soon.
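For anyone keeping score, the power tiers fluxtatic mentions line up with the slot plus the standard auxiliary plugs - 75W from the x16 slot, plus 75W from a 6-pin connector and 150W from an 8-pin connector (the connector figures are from the PCI-E spec, not from the thread). A small illustrative sketch of that arithmetic:

    # How the PCI-E card power classes add up (75W slot, 75W 6-pin, 150W 8-pin)
    SLOT_W, SIX_PIN_W, EIGHT_PIN_W = 75, 75, 150
    print("Slot only:            ", SLOT_W, "W")                            # 75W card
    print("Slot + 6-pin:         ", SLOT_W + SIX_PIN_W, "W")                # 150W card
    print("Slot + 6-pin + 8-pin: ", SLOT_W + SIX_PIN_W + EIGHT_PIN_W, "W")  # 300W card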
dyzophoria 30th July 2011, 12:11 Quote
Honestly, 500 watt GPUs? And nobody is worried about their monthly electricity bill? lol. I hope they can find ways to deliver more performance with less power consumption. Imagine having to practically re-wire your house just to get the next generation of computers running - that is just impractical.
TAG 30th July 2011, 12:31 Quote
Wouldn't a watercooled, overclocked GTX590 hit 500W already?
fingerbob69 30th July 2011, 12:33 Quote
"... give me a card with the power draw of the 5770 with the capabilities of the GTX570 and that would be all I need for a good long while. Not that it's likely to happen anytime soon."

nVidia cards of late seem to have much higher power draws, so you could be waiting an awful long time there.

You might not have so long to wait to get 6950 performance for a 5770 power draw though: currently the same idle and within 18.5% at load.
TAG 30th July 2011, 12:42 Quote
Not sure where you're getting your numbers from

guru3d finds the following
6950: 158W
5770: 93W + 20W (idle) = 113W

From these numbers we find the 6950 uses 40% more power than the 5770, or the 5770 uses 28% less power than a 6950.
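For anyone checking that arithmetic, it is just the ratio of the two quoted load figures (illustrative only):

    # Relative power draw from the figures quoted above
    hd6950_w, hd5770_w = 158, 93 + 20  # watts, as stated
    print(f"6950 vs 5770: {(hd6950_w / hd5770_w - 1) * 100:.0f}% more power")  # ~40%
    print(f"5770 vs 6950: {(1 - hd5770_w / hd6950_w) * 100:.0f}% less power")  # ~28%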
Evildead666 30th July 2011, 13:40 Quote
Quote:
Originally Posted by Sloth
Someone correct me if I'm wrong, but in situations where certain chipsets can only support a set number of PCI-E lanes wouldn't this allow motherboard manufacturers to setup boards for SLI/CF using two x8 PCI-E 3 slots and still get the same performance as using two x16 PCI-E 2 slots? If so, seems like a pretty good upgrade even if current cards can't saturate 16 lanes.

Yup, and boards could also do 4x PCIe3 x8 which would be the equivalent of 4x PCIe2 x16.

I think it basically means we should start seeing motherboards with all PCIe3 x16 slots, but some running at 8x speed, and the GPU ones at 16x speed. Nothing but full length slots though.
PCI is dead and buried.
west 31st July 2011, 07:56 Quote
"PCI is dead and buried."

I don't think so. Look at any consumer motherboard.
Elledan 31st July 2011, 11:05 Quote
I'm waiting for Intel's 3D transistor technology to make it into GPUs. Now that should knock down the power usage something seriously :)

*keeps waiting for Ivy Bridge to be released*
Farting Bob 31st July 2011, 17:23 Quote
Quote:
Originally Posted by TAG
Wouldn't a watercooled, overclocked GTX590 hit 500W already?

A reference 590 has a TDP of 365W, which is out of spec with PCIe, but in reality, if you have the PSU to handle it then it's not going to break anything. But even with a crazy WC overclock you'd be unlikely to hit 500W. That's liquid nitrogen territory, I suspect.
TAG 31st July 2011, 17:48 Quote
Quote:
Originally Posted by Farting Bob
A reference 590 has a TDP of 365W, which is out of spec with PCIe, but in reality, if you have the PSU to handle it then it's not going to break anything. But even with a crazy WC overclock you'd be unlikely to hit 500W. That's liquid nitrogen territory, I suspect.

I doubt that's LN2 territory. 2x standard clocked GTX580s would use in excess of 500W.
Overclocking a GTX590 to GTX580 clocks wouldn't require LN2 now, would it?

Its limits are in its power circuitry, not in the cooling.
This is already almost hitting 500W aircooled.
Considering how a 570 uses 213W, I'd say this overclocked 590 uses 457W.
play_boy_2000 31st July 2011, 18:30 Quote
Quote:
Originally Posted by west
"PCI is dead and buried."

I don't think so. Look at any consumer motherboard.
Sadly true. I just had a quick peek at my local computer store as well as Newegg; in some categories the number of devices available in PCI still outweighs PCIe by as much as 8:1, with only non-RAID SATA add-on cards exceeding a 1:1 ratio (USB is close, SATA/SAS RAID cards dominate the other way).
slothy89 2nd August 2011, 05:36 Quote
Quote:
Originally Posted by TAG
Quote:
Originally Posted by Farting Bob
A reference 590 has a TDP of 365W, which is out of spec with PCIe, but in reality, if you have the PSU to handle it then it's not going to break anything. But even with a crazy WC overclock you'd be unlikely to hit 500W. That's liquid nitrogen territory, I suspect.

I doubt that's LN2 territory. 2x standard clocked GTX580s would use in excess of 500W.
Overclocking a GTX590 to GTX580 clocks wouldn't require LN2 now, would it?

Its limits are in its power circuitry, not in the cooling.
This is already almost hitting 500W aircooled.

Of course 2 gtx580s would exceed 500w.. Combined..

These numbers are on a per card basis.. Plus the 590 is not two full 580s on the one pcb, many components are shared. Hence the lower power ceiling. *rolls eyes*

The suggested possibility of higher-bandwidth storage cards (RAID), or graphics cards getting equal bandwidth from half the lane count, seems a more reasonable outcome. Even the GTX580 only sees a very minor improvement on x16 over x8.

In 5 years we might have a real need for more than x16 PCIe 2, but for the near future it's enough.
Haphestus 2nd August 2011, 09:12 Quote
Quote:
PCI-Ex 3.0 - yes, it is useful now, but certainly not for VGAs. Not one currently available graphics card can saturate an x16 slot. Today's VGAs are barely capable of saturating an x8 slot (and that only for top-of-the-line products). So a Gen 3 x16 slot is way too soon for graphics.

Storage is another matter. It is pretty much the only major branch of the IT industry which will benefit from the extra bandwidth introduced with Gen 3. It will simplify manufacturing by reducing the number of x16 slots; an x8 slot will provide as much bandwidth. And for entry-level servers and workstations I can see a move back to the x4 slot, which will provide the same amount of bandwidth as Gen 2 x8.

The 300W limit. Hmm, that is a tough one. I would gladly see the limit reduced. There is no hope in hell of seeing any major breakthrough in the way the modern PC is built. Of course, like everybody, I would love to see a CPU with the power of 30x an i7, or a VGA with GTX580 performance multiplied by 30, both draining 5W. But that won't happen for the next 50 years or so. And at that point I won't care too much about PCs! :D Anyway, I think the tendency should be towards bigger PSUs feeding power directly, instead of stressing motherboard circuitry.

50 years.........try 5-10 :D
TAG 2nd August 2011, 09:44 Quote
Quote:
Originally Posted by slothy89
Of course 2 gtx580s would exceed 500w.. Combined..

These numbers are on a per card basis.. Plus the 590 is not two full 580s on the one pcb, many components are shared. Hence the lower power ceiling. *rolls eyes*

457W ...
Close enough
TAG 2nd August 2011, 16:47 Quote
Asus Mars II
a true dual GTX580
http://www.techpowerup.com/146588/ASUS-MARS-II-Graphics-Card-Pictured.html

600W of insanity

official pics here
GaMEChld 10th August 2011, 06:53 Quote
The real benefit of this is the increased bandwidth for ALL related devices, not just video cards. The SATA ports, USB, Ethernet, as well as the traditional PCI-e slots, all use PCI-e lanes to route data through the southbridge and whatnot. This will hopefully allow for a maximum number of SATA connections as well as a robust PCI-e arrangement with little concern for bandwidth limitations. Now maybe we can have 10 SATA III ports with 5 PCI-e x16 slots (running in some combination of x16, x8 and x4) and have that be the norm for even low-to-mid-range boards. Additionally, with that bandwidth, we can probably get away with a video card in a slot running at x4 speed.
TAG 19th August 2011, 14:41 Quote
Here's a >500w videocard
Haphestus 19th August 2011, 15:18 Quote
Whoa !! and Baby!! spring to mind.

You need a fusion reactor to power this mutha :D

I guess you will need a small mortgage to buy one :( (now where did i put my house deeds......................)
rollo 9th September 2011, 13:42 Quote
At the current rate of performance increases, in 5-10 years you will be able to buy a CPU that outperforms the current Sandy Bridge by 50 times - same with GPUs.

Look at the old 5 series from Nvidia; it's not as old as people think.
Ahadihunter1 3rd November 2011, 14:26 Quote
Quote:
Originally Posted by borandi
Quote:
Originally Posted by mingemuncher
500 watt gpu's ?

There were two 600W GPUs on show at Computex, one of which was definitely being put into production.

LOLOLOLOLOLOLLOLOLLL WHAT??? THATS HIGHER THAN MY IRON....
Ahadihunter1 3rd November 2011, 14:27 Quote
Quote:
Originally Posted by Ahadihunter1
Quote:
Originally Posted by borandi
Quote:
Originally Posted by mingemuncher
500 watt gpu's ?

There were two 600W GPUs on show at Computex, one of which was definitely being put into production.

LOLOLOLOLOLOLLOLOLLL WHAT??? THATS HIGHER THAN MY IRON....
Screw it dude.. I'm sticking on the low tech lol.